Compare commits


109 Commits
2.3.3 ... 2.3.5

Author SHA1 Message Date
holger krekel
8c7ae7f7a5 release 2.3.5 packaging 2013-04-30 12:26:30 +02:00
holger krekel
05c4ecf892 fix recursion within import hook and source.decode in particular 2013-04-30 12:05:58 +02:00
holger krekel
c5f9958783 never consider a fixture function for test function collection 2013-04-29 10:31:51 +02:00
Floris Bruynooghe
7a90515d49 Treat frozenset as a set
Thanks to Brianna Laugher.
2013-04-28 20:59:10 +01:00
Floris Bruynooghe
3ab94544b9 Ignore rope auto-generated files 2013-04-28 20:57:52 +01:00
Floris Bruynooghe
3c317dc35e Minor style cleanup 2013-04-28 20:56:56 +01:00
holger krekel
b2cb93e06d allow re-running of a test item (as exercised by the
pytest-rerunfailures plugin) by re-initializing and removing
request/funcargs information in runtestprotocol() - which is a slightly
odd place to add funcarg-related functionality but it allows all
pytest_runtest_setup/teardown hooks to properly see a valid
request/funcarg content on test items.
2013-04-22 10:35:48 +02:00
Ronny Pfannschmidt
cf7cae0780 pdb plugin: move entering pdb into a toplevel function
this prepares pdb at collect time
2013-04-18 11:18:24 +02:00
Ronny Pfannschmidt
55c349a9eb clarify pdb visible stack end finding by turning it into a function 2013-04-16 10:19:20 +02:00
Ronny Pfannschmidt
73446e98be turn the postmortem traceback selection to a function 2013-04-16 10:18:08 +02:00
holger krekel
0bc98eb9d2 add to changelog: put captured stdout/stderr into junitxml output even
for passing tests (thanks Adam Goucher)
2013-04-16 09:14:47 +02:00
holger krekel
bfe9779b37 merge 2013-04-16 09:13:58 +02:00
holger krekel
bb6f3ebd31 slightly improve -k help string
cosmetic change to test_nose.py
2013-04-16 09:04:05 +02:00
holger krekel
ee69b43c7a Merged in adamgoucher/pytest (pull request #29)
stdout/stderr now captured by junitxml
2013-04-16 09:02:08 +02:00
Ronny Pfannschmidt
63a6936d82 move pdb plugin post mortem traceback selection to its own function
this is preparation for making it resilient against broken envs
that can't import doctest
2013-04-16 08:46:55 +02:00
Adam Goucher
1cbd2db621 stdout/stderr now captured by junitxml 2013-04-16 00:45:14 -04:00
holger krekel
94aa76fec0 fix reference 2013-04-04 14:36:44 +02:00
Sasha Hart
265a4de06e doc fix: 'x' should be '-x' so it is not interpreted as a filename 2013-04-03 14:51:06 -05:00
holger krekel
712898cfe1 - add release announce 2013-03-28 10:21:03 +01:00
Floris Bruynooghe
f31dc7a8b7 Attempt to improve detailed failure reporting
* If --verbose is used do not truncate.

* Add a special dict comparison instead of diffing
  pprint output.
2013-03-28 01:39:01 +00:00
Ronny Pfannschmidt
9c9679945e fix Issue 265 - integrate nose setup/teardown with setupstate
as a side effect, teardown is only called if setup doesn't fail
2013-03-25 10:52:02 +01:00
Ronny Pfannschmidt
ba79c1926c add a test for issue 14 that will xfail on python < 2.7 2013-03-25 08:53:08 +01:00
Ronny Pfannschmidt
76fb51a4ba fix issue 271 - dont write junitxml on slave nodes 2013-03-24 20:43:25 +01:00
Ronny Pfannschmidt
93da606763 fix Issue 274 - dont fail when doctest does not know the example location
instead only the last test is shown, this could use some further enhancement
2013-03-24 20:05:29 +01:00
Benjamin Peterson
5e479c94ce disable assertion rewriting on CPython 2.6.0 because of bugs (fixes #280) 2013-03-21 12:19:01 -05:00
holger krekel
1884be0121 added changelog entry for getfixture() for doctests 2013-03-21 12:41:39 +01:00
holger krekel
8f8466ee40 Merged in witsch/pytest/doctest-fixtures (pull request #25)
fixture support in doctests
2013-03-21 12:33:43 +01:00
Andreas Zeidler
dfcb0e322c rename get_fixture to getfixture to better match the current API style
--HG--
branch : doctest-fixtures
2013-03-21 12:04:14 +01:00
Andreas Zeidler
da3b42ce46 remove debugging left-overs
--HG--
branch : doctest-fixtures
2013-03-21 01:03:59 +01:00
Andreas Zeidler
fa9bd8443f update the documentation regarding the get_fixture helper
please note that the japanese translation was done using "google translate" and should probably be checked again... :)

--HG--
branch : doctest-fixtures
2013-03-20 17:54:38 +01:00
Andreas Zeidler
5a3547dd7e also provide get_fixture helper for module level doctests
--HG--
branch : doctest-fixtures
2013-03-20 17:32:48 +01:00
Andreas Zeidler
c4b3a09886 test get_fixture helper for doctests
--HG--
branch : doctest-fixtures
2013-03-20 17:14:28 +01:00
Andreas Zeidler
f747d363b0 don't expose the FixtureRequest object itself in doctests. in most cases get_fixture is sufficient, and you can always call get_fixture('request') anyway
--HG--
branch : doctest-fixtures
2013-03-20 16:36:48 +01:00
Benjamin Peterson
65c69a34ac python 2.4 compatibility 2013-03-16 20:08:01 -07:00
Takafumi Arakaki
5ba2a7f628 Add texinfo build target to doc/*/Makefile 2013-03-10 07:25:14 +01:00
Benjamin Peterson
0cf79b29cd in the default Python 2 case, manually check the source is ASCII (fixes #269) 2013-03-08 10:44:41 -05:00
Floris Bruynooghe
6d1662e4b7 Use py.builtin._basestring 2013-02-15 13:38:40 +00:00
Floris Bruynooghe
850fd2b7f7 Mention fix of issue 266 in changelog 2013-02-15 13:28:26 +00:00
Floris Bruynooghe
48e6aa9dc7 Allow MarkEvaluator expressions to be unicode
This fixes issue #266.
2013-02-15 11:47:48 +00:00
Ronny Pfannschmidt
0dd05023b8 fix issue 251 - report a skip instead of ignoring classes with init 2013-02-15 10:18:00 +01:00
Ronny Pfannschmidt
aeba66ac6a fix typo in link 2013-02-14 14:15:13 +01:00
Ronny Pfannschmidt
d23f9fab46 update changelog 2013-02-14 13:17:05 +01:00
Ronny Pfannschmidt
69ef750091 fix issue134 - print the collect errors that prevent running specified test items 2013-02-14 12:21:42 +01:00
Ronny Pfannschmidt
ca8b3c2307 unify logic for error exit on test failures 2013-02-14 12:13:04 +01:00
holger krekel
857c99d354 fix py32 incompatible syntax 2013-02-14 12:17:23 +01:00
holger krekel
3785f1aae3 make dev pytest depend on installing from pypi.testrun.org 2013-02-14 11:57:32 +01:00
holger krekel
d0e18ac63f issue250 unicode/str mixes in parametrization names and values now works 2013-02-12 23:30:34 +01:00
holger krekel
296f752cca fix --genscript option to generate standalone scripts that also
work with python3.3 (importer ordering)
2013-02-12 22:59:29 +01:00
holger krekel
456731ed0f fix issue257 assertion-triggered compilation of source ending in a
comment line doesn't blow up in python2.5 (fixed through py>=1.4.13.dev6)
2013-02-12 22:43:33 +01:00
holger krekel
c8653b4c02 merge 2013-02-12 20:45:01 +01:00
holger krekel
e7a86caac2 strike python3.1 tox testing, 3.2 and 3.3 is enough 2013-02-12 20:44:04 +01:00
Ronny Pfannschmidt
162c3689c6 fix issue 260 - don't use nose specials on plain unittest cases 2013-02-07 17:53:13 +01:00
Ronny Pfannschmidt
b94c3084a6 small line length fix in nose plugin call optional 2013-02-07 10:41:07 +01:00
holger krekel
570ad36eaf fix parametrized testid to provide for uniqueness 2013-02-05 17:41:45 +01:00
holger krekel
9d107523a1 py3 fixes 2013-02-04 16:07:51 +01:00
holger krekel
06ab38a2fc strip old comment and hack 2013-02-03 20:47:39 +01:00
holger krekel
e007f2dc54 add note on leipzig course in june 2013 2013-02-02 20:15:01 +01:00
Andreas Zeidler
25547e3afb pass fixture request object (and convenience shortcut to get fixtures) into doctest files
--HG--
branch : doctest-fixtures
2013-01-30 17:32:37 +01:00
Ronny Pfannschmidt
64e6c71bf6 merge 2013-01-27 02:10:52 +01:00
Ronny Pfannschmidt
80f590288b add some bits to ISSUES 2013-01-27 02:10:29 +01:00
Ronny Pfannschmidt
570688f701 ensure OutcomeExceptions like skip/fail have initialized exception attributes 2013-01-27 02:06:19 +01:00
holger krekel
c5f587d6db don't test on py24 for now because tox/virtualenv-1.8 does not support
python2.4
2013-01-26 14:49:33 +01:00
holger krekel
ee713ad036 add Brian Okken's blog post as a tutorial 2013-01-21 09:04:01 +01:00
Floris Bruynooghe
51b40dd22c Add isolation plugin as a feature 2013-01-16 17:09:17 +00:00
Benjamin Peterson
65edf87ea6 display the repr of some global names (fixes #171) 2013-01-10 11:59:08 -06:00
holger krekel
4d4b551079 adapt locations of ML to new @python.org location 2012-12-27 16:48:17 +01:00
holger krekel
e13fedc256 fix pylib links 2012-12-27 16:48:14 +01:00
holger krekel
97f9bc2e46 fix/enhance example 2012-12-20 15:57:07 +01:00
holger krekel
d0bf65e6c8 adding an example on how to interact with the list of collected tests once before any tests are run 2012-12-16 11:28:17 +01:00
holger krekel
8d25e52e1e add sentry 2012-12-15 08:09:23 +01:00
Dusty Phillips
6fefab0e3a pocoo no longer has a pastebin service, so this section title is incorrect. 2012-12-11 12:04:12 -07:00
holger krekel
1e94d900f2 fixed versioning, thanks Arfrever 2012-12-09 09:19:33 +01:00
holger krekel
5f99511ab7 fix test after ronny's pytest-debug improvements 2012-12-04 20:31:37 +01:00
holger krekel
22dd5e29e2 when information gets truncated, mention use of "-vv" to see it. 2012-11-30 12:18:12 +01:00
Ronny Pfannschmidt
725e63db66 improve PYTEST_DEBUG tracing output
by putting extra data on new lines
with additional indent
2012-11-29 10:04:39 +01:00
holger krekel
3d79e7060e allow to specify prefixes starting with "_" when
customizing python_functions test discovery. (thanks Graham Horler)
2012-11-28 09:23:36 +01:00
Graham Horler
1d7c71884e Remove check for "_" prefix on python functions (use python_functions)
(See IRC hpk 2012-11-27 14:56: after the python_functions customization
 was introduced, it makes sense to disregard the preliminary "_" check)
2012-11-27 16:58:08 +00:00
Wieland Hoffmann
ffb5b8efa1 Fix a broken link to pytest-twisted 2012-11-22 19:59:15 +01:00
holger krekel
68786a6434 fix bug where using capsys with pytest.set_trace() in a test
function would break when looking at capsys.readouterr()
2012-11-21 20:43:31 +01:00
holger krekel
b97de57ebe improve docstring for metafunc.parametrize() 2012-11-21 10:13:44 +01:00
holger krekel
03445913e0 rename README.txt to README.rst 2012-11-20 14:37:39 +01:00
holger krekel
8580058ffb move long description into README 2012-11-20 14:24:26 +01:00
holger krekel
1c9ef2443f bump version, fix -k option help 2012-11-20 14:20:39 +01:00
holger krekel
cac1a48fc7 Added tag 2.3.4 for changeset ef299e57f242 2012-11-20 14:09:40 +01:00
holger krekel
b5955c5979 fix version number, final fixes 2012-11-20 14:01:31 +01:00
holger krekel
765b053984 bump version, add announcement, regen docs 2012-11-20 13:42:00 +01:00
holger krekel
a9adfa9114 don't run long-args test on windows because it can't work 2012-11-20 11:52:06 +01:00
holger krekel
7f403950ad adapt changelog entry about autouse fixtures and yield 2012-11-19 22:20:37 +01:00
holger krekel
f263f54889 make yielded tests participate in the autouse protocol 2012-11-19 22:17:59 +01:00
holger krekel
d66ff7e63e fix autouse invocation (off-by-one error), relates to issue in moinmoin test suite 2012-11-19 22:17:55 +01:00
holger krekel
f3e03fc298 modernize tmpdir fixture (use request.node in tmpdir fixture, use @pytest.fixture) 2012-11-19 14:07:14 +01:00
holger krekel
2ef350aede getting rid of redundant "active" attribute 2012-11-19 12:42:10 +01:00
holger krekel
b940ed11a0 fix issue226 - LIFO ordering for fixture-added teardowns 2012-11-16 10:03:51 +01:00
holger krekel
e15da7cbef add a note about yield tests at least in the CHANGELOG 2012-11-14 10:02:47 +01:00
holger krekel
5b64b0130d fix typo (thanks Thomas Waldmann) 2012-11-14 09:40:01 +01:00
holger krekel
af89a9667f add example for accessing test result information from fixture 2012-11-14 09:39:21 +01:00
holger krekel
c64c567b75 fix issue224 - invocations with >256 char arguments now work 2012-11-12 10:15:43 +01:00
ENDOH takanao
d31f4dcba8 Fix typos in a document 2012-11-10 16:29:43 +09:00
holger krekel
d9ce7f143e switch to pushing docs to dev, amend markers example which needs the dev candidate 2012-11-09 12:40:48 +01:00
holger krekel
4ac465acfb allow to pass expressions to "-k" option, just like with the "-m" option 2012-11-09 12:29:33 +01:00
holger krekel
a4909a0ae4 allow to dynamically define markers (e.g. during pytest_collection_modifyitems) 2012-11-09 12:07:41 +01:00
holger krekel
c790490387 add an example for postprocessing a test failure 2012-11-08 23:36:16 +01:00
holger krekel
664b01ca42 fix misleading typo 2012-11-08 19:05:46 +01:00
holger krekel
ff0c75aa34 - add a Package/dir level setup example
- make tox.ini's doc/regen use pytest release instead of dev version
2012-11-07 11:11:40 +01:00
holger krekel
476d210d09 prolong workaround for jython AST bug http://bugs.jython.org/issue1497
to make pytest work for post-2.5.1 jython versions
2012-11-07 10:05:39 +01:00
holger krekel
eedc4242ef mention that jython-2.5.1 works 2012-11-07 09:35:49 +01:00
holger krekel
370f5dd5cb fix typo 2012-11-06 15:46:52 +01:00
holger krekel
79f45928a4 add release announce for 2.3.3 2012-11-06 15:41:51 +01:00
holger krekel
dbff4ae034 Added tag 2.3.3 for changeset 7fe44182c434 2012-11-06 15:38:49 +01:00
89 changed files with 2083 additions and 482 deletions


@@ -25,3 +25,4 @@ env/
.tox
.cache
.coverage
.ropeproject


@@ -52,3 +52,5 @@ ad9fe504a371ad8eb613052d58f229aa66f53527 2.2.4
c27a60097767c16a54ae56d9669a77925b213b9b 2.3.0
acf0e1477fb19a1d35a4e40242b77fa6af32eb17 2.3.1
8738b828dec53937765db71951ef955cca4c51f6 2.3.2
7fe44182c434f8ac89149a3c340479872a5d5ccb 2.3.3
ef299e57f24218dbdd949498d7e660723636bcc3 2.3.4


@@ -23,3 +23,5 @@ Grig Gheorghiu
Bob Ippolito
Christian Tismer
Daniel Nuri
Graham Horler
Andreas Zeidler


@@ -1,3 +1,86 @@
Changes between 2.3.4 and 2.3.5dev
-----------------------------------
- never consider a fixture function for test function collection
- allow re-running of test items / helps to fix the pytest-rerunfailures plugin
and also helps to keep fewer fixture/resource references alive
- put captured stdout/stderr into junitxml output even for passing tests
(thanks Adam Goucher)
- Issue 265 - integrate nose setup/teardown with setupstate
so it doesn't try to tear down if setup did not run
- issue 271 - don't write junitxml on slave nodes
- Issue 274 - don't try to show the full doctest example
when doctest does not know the example location
- issue 280 - disable assertion rewriting on buggy CPython 2.6.0
- inject "getfixture()" helper to retrieve fixtures from doctests,
thanks Andreas Zeidler
- issue 269 - when assertion rewriting, be consistent with the default
source encoding of ASCII on Python 2
- issue 251 - report a skip instead of ignoring classes with init
- issue250 - unicode/str mixes in parametrization names and values now work
- issue257, assertion-triggered compilation of source ending in a
comment line doesn't blow up in python2.5 (fixed through py>=1.4.13.dev6)
- fix --genscript option to generate standalone scripts that also
work with python3.3 (importer ordering)
- issue171 - in assertion rewriting, show the repr of some
global variables
- fix option help for "-k"
- move long description of distribution into README.rst
- improve docstring for metafunc.parametrize()
- fix bug where using capsys with pytest.set_trace() in a test
function would break when looking at capsys.readouterr()
- allow to specify prefixes starting with "_" when
customizing python_functions test discovery. (thanks Graham Horler)
- improve PYTEST_DEBUG tracing output by putting
extra data on new lines with additional indent
- ensure OutcomeExceptions like skip/fail have initialized exception attributes
- issue 260 - don't use nose special setup on plain unittest cases
- fix issue134 - print the collect errors that prevent running specified test items
- fix issue266 - accept unicode in MarkEvaluator expressions
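As a minimal sketch of the "getfixture()" doctest helper mentioned above (the file
name and the use of the built-in tmpdir fixture are illustrative; the file would be
run as a text-file doctest via the --doctest-glob option)::

    # content of test_doctest_fixtures.txt
    >>> tmp = getfixture('tmpdir')     # helper injected into the doctest globals
    >>> p = tmp.join('hello.txt')
    >>> p.write('content')
    >>> p.read()
    'content'
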
Changes between 2.3.3 and 2.3.4
-----------------------------------
- yielded test functions will now have autouse-fixtures active but
cannot accept fixtures as funcargs - it is recommended to use the
post-2.0 parametrize features instead of yield, see:
http://pytest.org/latest/example/parametrize.html
- fix autouse issue where autouse-fixtures would not be discovered
if defined in an a/conftest.py file with tests in a/tests/test_some.py
- fix issue226 - LIFO ordering for fixture teardowns
- fix issue224 - invocations with >256 char arguments now work
- fix issue91 - add/discuss package/directory level setups in example
- allow markers to be defined dynamically via
item.keywords[...] = assignment, integrating with the "-m" option
(see the conftest.py sketch after this list)
- make "-k" accept an expressions the same as with "-m" so that one
can write: -k "name1 or name2" etc. This is a slight incompatibility
if you used special syntax like "TestClass.test_method" which you now
need to write as -k "TestClass and test_method" to match a certain
method in a certain test class.
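A minimal conftest.py sketch for the dynamic-marker item above; the marker name
"webtest" and the nodeid substring check are illustrative assumptions, not part
of the release itself::

    # content of conftest.py
    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            if "http" in item.nodeid:
                # dynamically defined marker, selectable with: py.test -m webtest
                item.keywords["webtest"] = pytest.mark.webtest
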
Changes between 2.3.2 and 2.3.3
-----------------------------------
@@ -81,7 +164,7 @@ Changes between 2.2.4 and 2.3.0
- fix issue202 - better automatic names for parametrized test functions
- fix issue139 - introduce @pytest.fixture which allows direct scoping
and parametrization of funcarg factories.
and parametrization of funcarg factories.
- fix issue198 - conftest fixtures were not found on windows32 in some
circumstances with nested directory structures due to path manipulation issues
- fix issue193 skip test functions with were parametrized with empty


@@ -330,3 +330,75 @@ in one content string::
This could be run with at least three different ways to invoke pytest:
through the shell, through "python -m pytest" and inlined. As inlined
would be the fastest it could be run first (or "--fast" mode).
Create isolate plugin
---------------------
tags: feature
The idea is that you can e.g. import modules in a test and afterwards
sys.modules, sys.meta_path etc would be reverted. It can go further
than just importing however, e.g. current working directory, file
descriptors, ...
This would probably be done by marking::

    @pytest.mark.isolate(importing=True, cwd=True, fds=False)
    def test_foo():
        ...
With the possibility of doing this globally in an ini-file.
fnmatch for test names
----------------------
tags: feature-wish
various testsuites use suffixes instead of prefixes for test classes
also it lends itself to bdd style test names::

    class UserBehaviour:
        def anonymous_should_not_have_inbox(user):
            ...
        def registered_should_have_inbox(user):
            ...
using the following in pytest.ini::

    [pytest]
    python_classes = Test *Behaviour *Test
    python_functions = test *_should_*
mechanism for running named parts of tests with different reporting behaviour
------------------------------------------------------------------------------
tags: feature-wish-incomplete
a few use-cases come to mind:
* fail assertions and record that without stopping a complete test
* this is particularly helpful if a small bit of a test is known to fail/xfail::

    def test_fun():
        with pytest.section('fdcheck', marks=pytest.mark.xfail_if(...)):
            breaks_on_windows()
* divide functional/acceptance tests into sections
* provide a different mechanism for generators, maybe something like::

    def pytest_runtest_call(item):
        if not generator:
            ...
        prepare_check = GeneratorCheckprepare()
        gen = item.obj(**fixtures)
        for check in gen:
            id, call = prepare_check(check)
            # bubble should only prevent exception propagation after a failure
            # the whole test should still fail
            # there might be need for a lower level api and taking custom markers into account
            with pytest.section(id, bubble=False):
                call()


@@ -1,5 +1,5 @@
include CHANGELOG
include README.txt
include README.rst
include setup.py
include distribute_setup.py
include tox.ini

README.rst Normal file

@@ -0,0 +1,36 @@
The ``py.test`` testing tool makes it easy to write small tests, yet
scales to support complex functional testing. It provides
- `auto-discovery
<http://pytest.org/latest/goodpractises.html#python-test-discovery>`_
of test modules and functions,
- detailed info on failing `assert statements <http://pytest.org/latest/assert.html>`_ (no need to remember ``self.assert*`` names)
- `modular fixtures <http://pytest.org/latest/fixture.html>`_ for
managing small or parametrized long-lived test resources.
- multi-paradigm support: you can use ``py.test`` to run test suites based
on `unittest <http://pytest.org/latest/unittest.html>`_ (or trial),
`nose <http://pytest.org/latest/nose.html>`_
- single-source compatibility to Python2.4 all the way up to Python3.3,
PyPy-1.9 and Jython-2.5.1.
- many `external plugins <http://pytest.org/latest/plugins.html#installing-external-plugins-searching>`_.
A simple example for a test::

    # content of test_module.py
    def test_function():
        i = 4
        assert i == 3
which can be run with ``py.test test_module.py``. See `getting-started <http://pytest.org/latest/getting-started.html#our-first-test-run>`_ for more examples.
For much more info, including PDF docs, see
http://pytest.org
and report bugs at:
http://bitbucket.org/hpk42/pytest/issues/
Copyright Holger Krekel and others, 2004-2012


@@ -1,4 +0,0 @@
py.test is a simple and popular testing tool for Python.
See http://pytest.org for more documentation.


@@ -1,2 +1,2 @@
#
__version__ = '2.3.3'
__version__ = '2.3.5'


@@ -39,7 +39,10 @@ def pytest_configure(config):
except ImportError:
mode = "reinterp"
else:
if sys.platform.startswith('java'):
# Both Jython and CPython 2.6.0 have AST bugs that make the
# assertion rewriting hook malfunction.
if (sys.platform.startswith('java') or
sys.version_info[:3] == (2, 6, 0)):
mode = "reinterp"
if mode != "plain":
_load_modules(mode)
@@ -78,7 +81,7 @@ def pytest_runtest_setup(item):
if new_expl:
# Don't include pageloads of data unless we are very verbose (-vv)
if len(''.join(new_expl[1:])) > 80*8 and item.config.option.verbose < 2:
new_expl[1:] = ['Detailed information too verbose, truncated']
new_expl[1:] = ['Detailed information truncated, use "-vv" to see']
res = '\n~'.join(new_expl)
if item.config.getvalue("assertmode") == "rewrite":
# The result will be fed back a python % formatting


@@ -11,7 +11,7 @@ from _pytest.assertion import util
from _pytest.assertion.reinterpret import BuiltinAssertionError
if sys.platform.startswith("java") and sys.version_info < (2, 5, 2):
if sys.platform.startswith("java"):
# See http://bugs.jython.org/issue1497
_exprs = ("BoolOp", "BinOp", "UnaryOp", "Lambda", "IfExp", "Dict",
"ListComp", "GeneratorExp", "Yield", "Compare", "Call",


@@ -6,6 +6,7 @@ import itertools
import imp
import marshal
import os
import re
import struct
import sys
import types
@@ -38,6 +39,7 @@ PYC_EXT = ".py" + (__debug__ and "c" or "o")
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT
REWRITE_NEWLINES = sys.version_info[:2] != (2, 7) and sys.version_info < (3, 2)
ASCII_IS_DEFAULT_ENCODING = sys.version_info[0] < 3
class AssertionRewritingHook(object):
"""PEP302 Import hook which rewrites asserts."""
@@ -187,12 +189,43 @@ def _write_pyc(co, source_path, pyc):
RN = "\r\n".encode("utf-8")
N = "\n".encode("utf-8")
cookie_re = re.compile("coding[:=]\s*[-\w.]+")
BOM_UTF8 = '\xef\xbb\xbf'
def _rewrite_test(state, fn):
"""Try to read and rewrite *fn* and return the code object."""
try:
source = fn.read("rb")
except EnvironmentError:
return None
if ASCII_IS_DEFAULT_ENCODING:
# ASCII is the default encoding in Python 2. Without a coding
# declaration, Python 2 will complain about any bytes in the file
# outside the ASCII range. Sadly, this behavior does not extend to
# compile() or ast.parse(), which prefer to interpret the bytes as
# latin-1. (At least they properly handle explicit coding cookies.) To
# preserve this error behavior, we could force ast.parse() to use ASCII
# as the encoding by inserting a coding cookie. Unfortunately, that
# messes up line numbers. Thus, we have to check ourselves if anything
# is outside the ASCII range in the case no encoding is explicitly
# declared. For more context, see issue #269. Yay for Python 3 which
# gets this right.
end1 = source.find("\n")
end2 = source.find("\n", end1 + 1)
if (not source.startswith(BOM_UTF8) and
(not cookie_re.match(source[0:end1]) or
not cookie_re.match(source[end1:end2]))):
if hasattr(state, "_indecode"):
return None # encodings imported us again, we don't rewrite
state._indecode = True
try:
try:
source.decode("ascii")
except UnicodeDecodeError:
# Let it fail in real import.
return None
finally:
del state._indecode
# On Python versions which are not 2.7 and less than or equal to 3.1, the
# parser expects *nix newlines.
if REWRITE_NEWLINES:
@@ -262,6 +295,9 @@ def rewrite_asserts(mod):
_saferepr = py.io.saferepr
from _pytest.assertion.util import format_explanation as _format_explanation
def _should_repr_global_name(obj):
return not hasattr(obj, "__name__") and not py.builtin.callable(obj)
def _format_boolop(explanations, is_or):
return "(" + (is_or and " or " or " and ").join(explanations) + ")"
@@ -473,11 +509,12 @@ class AssertionRewriter(ast.NodeVisitor):
return self.statements
def visit_Name(self, name):
# Check if the name is local or not.
# Display the repr of the name if it's a local variable or
# _should_repr_global_name() thinks it's acceptable.
locs = ast.Call(self.builtin("locals"), [], [], None, None)
globs = ast.Call(self.builtin("globals"), [], [], None, None)
ops = [ast.In(), ast.IsNot()]
test = ast.Compare(ast.Str(name.id), ops, [locs, globs])
inlocs = ast.Compare(ast.Str(name.id), [ast.In()], [locs])
dorepr = self.helper("should_repr_global_name", name)
test = ast.BoolOp(ast.Or(), [inlocs, dorepr])
expr = ast.IfExp(test, self.display(name), ast.Str(name.id))
return name, self.explanation_param(expr)


@@ -10,6 +10,7 @@ BuiltinAssertionError = py.builtin.builtins.AssertionError
# DebugInterpreter.
_reprcompare = None
def format_explanation(explanation):
"""This formats an explanation
@@ -83,9 +84,9 @@ except NameError:
basestring = str
def assertrepr_compare(op, left, right):
"""return specialised explanations for some operators/operands"""
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
def assertrepr_compare(config, op, left, right):
"""Return specialised explanations for some operators/operands"""
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
left_repr = py.io.saferepr(left, maxsize=int(width/2))
right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
summary = '%s %s %s' % (left_repr, op, right_repr)
@@ -93,72 +94,72 @@ def assertrepr_compare(op, left, right):
issequence = lambda x: isinstance(x, (list, tuple))
istext = lambda x: isinstance(x, basestring)
isdict = lambda x: isinstance(x, dict)
isset = lambda x: isinstance(x, set)
isset = lambda x: isinstance(x, (set, frozenset))
verbose = config.getoption('verbose')
explanation = None
try:
if op == '==':
if istext(left) and istext(right):
explanation = _diff_text(left, right)
explanation = _diff_text(left, right, verbose)
elif issequence(left) and issequence(right):
explanation = _compare_eq_sequence(left, right)
explanation = _compare_eq_sequence(left, right, verbose)
elif isset(left) and isset(right):
explanation = _compare_eq_set(left, right)
explanation = _compare_eq_set(left, right, verbose)
elif isdict(left) and isdict(right):
explanation = _diff_text(py.std.pprint.pformat(left),
py.std.pprint.pformat(right))
explanation = _compare_eq_dict(left, right, verbose)
elif op == 'not in':
if istext(left) and istext(right):
explanation = _notin_text(left, right)
explanation = _notin_text(left, right, verbose)
except py.builtin._sysex:
raise
except:
excinfo = py.code.ExceptionInfo()
explanation = ['(pytest_assertion plugin: representation of '
'details failed. Probably an object has a faulty __repr__.)',
str(excinfo)]
explanation = [
'(pytest_assertion plugin: representation of details failed. '
'Probably an object has a faulty __repr__.)', str(excinfo)]
if not explanation:
return None
return [summary] + explanation
def _diff_text(left, right):
def _diff_text(left, right, verbose=False):
"""Return the explanation for the diff between text
This will skip leading and trailing characters which are
identical to keep the diff minimal.
Unless --verbose is used this will skip leading and trailing
characters which are identical to keep the diff minimal.
"""
explanation = []
i = 0 # just in case left or right has zero length
for i in range(min(len(left), len(right))):
if left[i] != right[i]:
break
if i > 42:
i -= 10 # Provide some context
explanation = ['Skipping %s identical '
'leading characters in diff' % i]
left = left[i:]
right = right[i:]
if len(left) == len(right):
for i in range(len(left)):
if left[-i] != right[-i]:
if not verbose:
i = 0 # just in case left or right has zero length
for i in range(min(len(left), len(right))):
if left[i] != right[i]:
break
if i > 42:
i -= 10 # Provide some context
explanation += ['Skipping %s identical '
'trailing characters in diff' % i]
left = left[:-i]
right = right[:-i]
i -= 10 # Provide some context
explanation = ['Skipping %s identical leading '
'characters in diff, use -v to show' % i]
left = left[i:]
right = right[i:]
if len(left) == len(right):
for i in range(len(left)):
if left[-i] != right[-i]:
break
if i > 42:
i -= 10 # Provide some context
explanation += ['Skipping %s identical trailing '
'characters in diff, use -v to show' % i]
left = left[:-i]
right = right[:-i]
explanation += [line.strip('\n')
for line in py.std.difflib.ndiff(left.splitlines(),
right.splitlines())]
return explanation
def _compare_eq_sequence(left, right):
def _compare_eq_sequence(left, right, verbose=False):
explanation = []
for i in range(min(len(left), len(right))):
if left[i] != right[i]:
@@ -166,16 +167,18 @@ def _compare_eq_sequence(left, right):
(i, left[i], right[i])]
break
if len(left) > len(right):
explanation += ['Left contains more items, '
'first extra item: %s' % py.io.saferepr(left[len(right)],)]
explanation += [
'Left contains more items, first extra item: %s' %
py.io.saferepr(left[len(right)],)]
elif len(left) < len(right):
explanation += ['Right contains more items, '
'first extra item: %s' % py.io.saferepr(right[len(left)],)]
return explanation # + _diff_text(py.std.pprint.pformat(left),
# py.std.pprint.pformat(right))
explanation += [
'Right contains more items, first extra item: %s' %
py.io.saferepr(right[len(left)],)]
return explanation # + _diff_text(py.std.pprint.pformat(left),
# py.std.pprint.pformat(right))
def _compare_eq_set(left, right):
def _compare_eq_set(left, right, verbose=False):
explanation = []
diff_left = left - right
diff_right = right - left
@@ -190,12 +193,41 @@ def _compare_eq_set(left, right):
return explanation
def _notin_text(term, text):
def _compare_eq_dict(left, right, verbose=False):
explanation = []
common = set(left).intersection(set(right))
same = dict((k, left[k]) for k in common if left[k] == right[k])
if same and not verbose:
explanation += ['Hiding %s identical items, use -v to show' %
len(same)]
elif same:
explanation += ['Common items:']
explanation += py.std.pprint.pformat(same).splitlines()
diff = set(k for k in common if left[k] != right[k])
if diff:
explanation += ['Differing items:']
for k in diff:
explanation += [py.io.saferepr({k: left[k]}) + ' != ' +
py.io.saferepr({k: right[k]})]
extra_left = set(left) - set(right)
if extra_left:
explanation.append('Left contains more items:')
explanation.extend(py.std.pprint.pformat(
dict((k, left[k]) for k in extra_left)).splitlines())
extra_right = set(right) - set(left)
if extra_right:
explanation.append('Right contains more items:')
explanation.extend(py.std.pprint.pformat(
dict((k, right[k]) for k in extra_right)).splitlines())
return explanation
def _notin_text(term, text, verbose=False):
index = text.find(term)
head = text[:index]
tail = text[index+len(term):]
correct_text = head + tail
diff = _diff_text(correct_text, text)
diff = _diff_text(correct_text, text, verbose)
newdiff = ['%s is contained here:' % py.io.saferepr(term, maxsize=42)]
for line in diff:
if line.startswith('Skipping'):


@@ -173,8 +173,7 @@ class CaptureManager:
if funcarg_outerr is not None:
outerr = (outerr[0] + funcarg_outerr[0],
outerr[1] + funcarg_outerr[1])
if not rep.passed:
addouterr(rep, outerr)
addouterr(rep, outerr)
if not rep.passed or rep.when == "teardown":
outerr = ('', '')
item.outerr = outerr
@@ -211,12 +210,15 @@ class CaptureFixture:
def _finalize(self):
if hasattr(self, 'capture'):
outerr = self.capture.reset()
outerr = self._outerr = self.capture.reset()
del self.capture
return outerr
def readouterr(self):
return self.capture.readouterr()
try:
return self.capture.readouterr()
except AttributeError:
return self._outerr
def close(self):
self._finalize()


@@ -181,7 +181,7 @@ class Conftest(object):
if hasattr(arg, 'startswith') and arg.startswith("--"):
continue
anchor = current.join(arg, abs=1)
if anchor.check(): # we found some file object
if exists(anchor): # we found some file object
self._try_load_conftest(anchor)
foundanchor = True
if not foundanchor:
@@ -479,6 +479,11 @@ class Config(object):
except KeyError:
py.test.skip("no %r value found" %(name,))
def exists(path, ignore=EnvironmentError):
try:
return path.check()
except ignore:
return False
def getcfg(args, inibasenames):
args = [x for x in args if not str(x).startswith("-")]
@@ -489,7 +494,7 @@ def getcfg(args, inibasenames):
for base in arg.parts(reverse=True):
for inibasename in inibasenames:
p = base.join(inibasename)
if p.check():
if exists(p):
iniconfig = py.iniconfig.IniConfig(p)
if 'pytest' in iniconfig.sections:
return iniconfig['pytest']


@@ -24,12 +24,28 @@ class TagTracer:
def get(self, name):
return TagTracerSub(self, (name,))
def format_message(self, tags, args):
if isinstance(args[-1], dict):
extra = args[-1]
args = args[:-1]
else:
extra = {}
content = " ".join(map(str, args))
indent = " " * self.indent
lines = [
"%s%s [%s]\n" %(indent, content, ":".join(tags))
]
for name, value in extra.items():
lines.append("%s %s: %s\n" % (indent, name, value))
return lines
def processmessage(self, tags, args):
if self.writer is not None:
if args:
indent = " " * self.indent
content = " ".join(map(str, args))
self.writer("%s%s [%s]\n" %(indent, content, ":".join(tags)))
if self.writer is not None and args:
lines = self.format_message(tags, args)
self.writer(''.join(lines))
try:
self._tag2proc[tags](tags, args)
except KeyError:
@@ -329,7 +345,10 @@ def importplugin(importspec):
#if str(e).find(name) == -1:
# raise
pass #
__import__(importspec)
try:
__import__(importspec)
except ImportError:
raise ImportError(importspec)
return sys.modules[importspec]
class MultiCall:


@@ -1,6 +1,7 @@
""" discover and run doctests in modules and test files."""
import pytest, py
from _pytest.python import FixtureRequest, FuncFixtureInfo
from py._code.code import TerminalRepr, ReprFileLocation
def pytest_addoption(parser):
@@ -41,17 +42,27 @@ class DoctestItem(pytest.Item):
example = doctestfailure.example
test = doctestfailure.test
filename = test.filename
lineno = test.lineno + example.lineno + 1
if test.lineno is None:
lineno = None
else:
lineno = test.lineno + example.lineno + 1
message = excinfo.type.__name__
reprlocation = ReprFileLocation(filename, lineno, message)
checker = py.std.doctest.OutputChecker()
REPORT_UDIFF = py.std.doctest.REPORT_UDIFF
filelines = py.path.local(filename).readlines(cr=0)
i = max(test.lineno, max(0, lineno - 10)) # XXX?
lines = []
for line in filelines[i:lineno]:
lines.append("%03d %s" % (i+1, line))
i += 1
if lineno is not None:
i = max(test.lineno, max(0, lineno - 10)) # XXX?
for line in filelines[i:lineno]:
lines.append("%03d %s" % (i+1, line))
i += 1
else:
lines.append('EXAMPLE LOCATION UNKNOWN, not showing all tests of that example')
indent = '>>>'
for line in example.source.splitlines():
lines.append('??? %s %s' % (indent, line))
indent = '...'
if excinfo.errisinstance(doctest.DocTestFailure):
lines += checker.output_difference(example,
doctestfailure.got, REPORT_UDIFF).split("\n")
@@ -70,9 +81,14 @@ class DoctestItem(pytest.Item):
class DoctestTextfile(DoctestItem, pytest.File):
def runtest(self):
doctest = py.std.doctest
# satisfy `FixtureRequest` constructor...
self.funcargs = {}
self._fixtureinfo = FuncFixtureInfo((), [], {})
fixture_request = FixtureRequest(self)
failed, tot = doctest.testfile(
str(self.fspath), module_relative=False,
optionflags=doctest.ELLIPSIS,
extraglobs=dict(getfixture=fixture_request.getfuncargvalue),
raise_on_error=True, verbose=0)
class DoctestModule(DoctestItem, pytest.File):
@@ -82,6 +98,11 @@ class DoctestModule(DoctestItem, pytest.File):
module = self.config._conftest.importconftest(self.fspath)
else:
module = self.fspath.pyimport()
# satisfy `FixtureRequest` constructor...
self.funcargs = {}
self._fixtureinfo = FuncFixtureInfo((), [], {})
fixture_request = FixtureRequest(self)
failed, tot = doctest.testmod(
module, raise_on_error=True, verbose=0,
extraglobs=dict(getfixture=fixture_request.getfuncargvalue),
optionflags=doctest.ELLIPSIS)


@@ -70,7 +70,8 @@ def pytest_addoption(parser):
def pytest_configure(config):
xmlpath = config.option.xmlpath
if xmlpath:
# prevent opening xmllog on slave nodes (xdist)
if xmlpath and not hasattr(config, 'slaveinput'):
config._xml = LogXML(xmlpath, config.option.junitprefix)
config.pluginmanager.register(config._xml)
@@ -106,11 +107,20 @@ class LogXML(object):
time=getattr(report, 'duration', 0)
))
def _write_captured_output(self, report):
sec = dict(report.sections)
for name in ('out', 'err'):
content = sec.get("Captured std%s" % name)
if content:
tag = getattr(Junit, 'system-'+name)
self.append(tag(bin_xml_escape(content)))
def append(self, obj):
self.tests[-1].append(obj)
def append_pass(self, report):
self.passed += 1
self._write_captured_output(report)
def append_failure(self, report):
#msg = str(report.longrepr.reprtraceback.extraline)
@@ -119,16 +129,11 @@ class LogXML(object):
Junit.skipped(message="xfail-marked test passes unexpectedly"))
self.skipped += 1
else:
sec = dict(report.sections)
fail = Junit.failure(message="test failure")
fail.append(str(report.longrepr))
self.append(fail)
for name in ('out', 'err'):
content = sec.get("Captured std%s" % name)
if content:
tag = getattr(Junit, 'system-'+name)
self.append(tag(bin_xml_escape(content)))
self.failed += 1
self._write_captured_output(report)
def append_collect_failure(self, report):
#msg = str(report.longrepr.reprtraceback.extraline)
@@ -161,6 +166,7 @@ class LogXML(object):
message=skipreason
))
self.skipped += 1
self._write_captured_output(report)
def pytest_runtest_logreport(self, report):
if report.passed:


@@ -93,12 +93,14 @@ def wrap_session(config, doit):
session.exitstatus = EXIT_INTERNALERROR
if excinfo.errisinstance(SystemExit):
sys.stderr.write("mainloop: caught Spurious SystemExit!\n")
else:
if session._testsfailed:
session.exitstatus = EXIT_TESTSFAILED
finally:
if initstate >= 2:
config.hook.pytest_sessionfinish(session=session,
exitstatus=session.exitstatus or (session._testsfailed and 1))
if not session.exitstatus and session._testsfailed:
session.exitstatus = EXIT_TESTSFAILED
config.hook.pytest_sessionfinish(
session=session,
exitstatus=session.exitstatus)
if initstate >= 1:
config.pluginmanager.do_unconfigure(config)
return session.exitstatus


@@ -7,12 +7,13 @@ def pytest_namespace():
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption('-k',
action="store", dest="keyword", default='', metavar="KEYWORDEXPR",
help="only run tests which match given keyword expression. "
"An expression consists of space-separated terms. "
"Each term must match. Precede a term with '-' to negate. "
"Terminate expression with ':' to make the first match match "
"all subsequent tests (usually file-order). ")
action="store", dest="keyword", default='', metavar="EXPRESSION",
help="only run tests which match the given substring expression. "
"An expression is a python evaluatable expression "
"where all names are substring-matched against test names "
"and keywords. Example: -k 'test_method or test_other' "
"matches all test functions whose name contains "
"'test_method' or 'test_other'.")
group._addoption("-m",
action="store", dest="markexpr", default="", metavar="MARKEXPR",
@@ -51,7 +52,7 @@ def pytest_collection_modifyitems(items, config):
remaining = []
deselected = []
for colitem in items:
if keywordexpr and skipbykeyword(colitem, keywordexpr):
if keywordexpr and not matchkeyword(colitem, keywordexpr):
deselected.append(colitem)
else:
if selectuntil:
@@ -72,37 +73,26 @@ class BoolDict:
def __getitem__(self, name):
return name in self._mydict
class SubstringDict:
def __init__(self, mydict):
self._mydict = mydict
def __getitem__(self, name):
for key in self._mydict:
if name in key:
return True
return False
def matchmark(colitem, matchexpr):
return eval(matchexpr, {}, BoolDict(colitem.obj.__dict__))
return eval(matchexpr, {}, BoolDict(colitem.keywords))
def matchkeyword(colitem, keywordexpr):
keywordexpr = keywordexpr.replace("-", "not ")
return eval(keywordexpr, {}, SubstringDict(colitem.keywords))
def pytest_configure(config):
if config.option.strict:
pytest.mark._config = config
def skipbykeyword(colitem, keywordexpr):
""" return True if they given keyword expression means to
skip this collector/item.
"""
if not keywordexpr:
return
itemkeywords = colitem.keywords
for key in filter(None, keywordexpr.split()):
eor = key[:1] == '-'
if eor:
key = key[1:]
if not (eor ^ matchonekeyword(key, itemkeywords)):
return True
def matchonekeyword(key, itemkeywords):
for elem in key.split("."):
for kw in itemkeywords:
if elem in kw:
break
else:
return False
return True
class MarkGenerator:
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``py.test.mark`` singleton instance. Example::


@@ -3,6 +3,8 @@
import pytest, py
import inspect
import sys
from _pytest import unittest
def pytest_runtest_makereport(__multicall__, item, call):
SkipTest = getattr(sys.modules.get('nose', None), 'SkipTest', None)
@@ -15,7 +17,7 @@ def pytest_runtest_makereport(__multicall__, item, call):
@pytest.mark.trylast
def pytest_runtest_setup(item):
if isinstance(item, (pytest.Function)):
if is_potential_nosetest(item):
if isinstance(item.parent, pytest.Generator):
gen = item.parent
if not hasattr(gen, '_nosegensetup'):
@@ -26,9 +28,11 @@ def pytest_runtest_setup(item):
if not call_optional(item.obj, 'setup'):
# call module level setup if there is no object level one
call_optional(item.parent.obj, 'setup')
#XXX this implies we only call teardown when setup worked
item.session._setupstate.addfinalizer((lambda: teardown_nose(item)), item)
def pytest_runtest_teardown(item):
if isinstance(item, pytest.Function):
def teardown_nose(item):
if is_potential_nosetest(item):
if not call_optional(item.obj, 'teardown'):
call_optional(item.parent.obj, 'teardown')
#if hasattr(item.parent, '_nosegensetup'):
@@ -39,9 +43,18 @@ def pytest_make_collect_report(collector):
if isinstance(collector, pytest.Generator):
call_optional(collector.obj, 'setup')
def is_potential_nosetest(item):
# extra check needed since we do not do nose style setup/teardown
# on direct unittest style classes
return isinstance(item, pytest.Function) and \
not isinstance(item, unittest.TestCaseFunction)
def call_optional(obj, name):
method = getattr(obj, name, None)
if method is not None and not hasattr(method, "_pytestfixturefunction") and py.builtin.callable(method):
isfixture = hasattr(method, "_pytestfixturefunction")
if method is not None and not isfixture and py.builtin.callable(method):
# If there's any problems allow the exception to raise rather than
# silently ignoring them
method()


@@ -61,24 +61,43 @@ class PdbInvoke:
return rep
if hasattr(rep, "wasxfail"):
return rep
# we assume that the above execute() suspended capturing
# XXX we re-use the TerminalReporter's terminalwriter
# because this seems to avoid some encoding related troubles
# for not completely clear reasons.
tw = item.config.pluginmanager.getplugin("terminalreporter")._tw
tw.line()
tw.sep(">", "traceback")
rep.toterminal(tw)
tw.sep(">", "entering PDB")
# A doctest.UnexpectedException is not useful for post_mortem.
# Use the underlying exception instead:
if isinstance(call.excinfo.value, py.std.doctest.UnexpectedException):
tb = call.excinfo.value.exc_info[2]
else:
tb = call.excinfo._excinfo[2]
post_mortem(tb)
rep._pdbshown = True
return rep
return _enter_pdb(item, call.excinfo, rep)
def _enter_pdb(item, excinfo, rep):
# we assume that the above execute() suspended capturing
# XXX we re-use the TerminalReporter's terminalwriter
# because this seems to avoid some encoding related troubles
# for not completely clear reasons.
tw = item.config.pluginmanager.getplugin("terminalreporter")._tw
tw.line()
tw.sep(">", "traceback")
rep.toterminal(tw)
tw.sep(">", "entering PDB")
tb = _postmortem_traceback(excinfo)
post_mortem(tb)
rep._pdbshown = True
return rep
def _postmortem_traceback(excinfo):
# A doctest.UnexpectedException is not useful for post_mortem.
# Use the underlying exception instead:
if isinstance(excinfo.value, py.std.doctest.UnexpectedException):
return excinfo.value.exc_info[2]
else:
return excinfo._excinfo[2]
def _find_last_non_hidden_frame(stack):
i = max(0, len(stack) - 1)
while i and stack[i][0].f_locals.get("__tracebackhide__", False):
i -= 1
return i
def post_mortem(t):
pdb = py.std.pdb
@@ -86,9 +105,7 @@ def post_mortem(t):
def get_stack(self, f, t):
stack, i = pdb.Pdb.get_stack(self, f, t)
if f is None:
i = max(0, len(stack) - 1)
while i and stack[i][0].f_locals.get("__tracebackhide__", False):
i-=1
i = _find_last_non_hidden_frame(stack)
return stack, i
p = Pdb()
p.reset()


@@ -195,10 +195,7 @@ class TmpTestdir:
except py.error.EEXIST:
continue
break
# we need to create another subdir
# because Directory.collect() currently loads
# conftest.py from sibling directories
self.tmpdir = tmpdir.mkdir(name)
self.tmpdir = tmpdir
self.plugins = []
self._syspathremove = []
self.chdir() # always chdir
@@ -422,7 +419,8 @@ class TmpTestdir:
str(os.getcwd()), env.get('PYTHONPATH', '')]))
kw['env'] = env
#print "env", env
return py.std.subprocess.Popen(cmdargs, stdout=stdout, stderr=stderr, **kw)
return py.std.subprocess.Popen(cmdargs,
stdout=stdout, stderr=stderr, **kw)
def pytestmain(self, *args, **kwargs):
class ResetCapturing:


@@ -175,10 +175,10 @@ def pytest_pycollect_makeitem(__multicall__, collector, name, obj):
#if hasattr(collector.obj, 'unittest'):
# return # we assume it's a mixin class for a TestCase derived one
if collector.classnamefilter(name):
if not hasinit(obj):
Class = collector._getcustomclass("Class")
return Class(name, parent=collector)
elif collector.funcnamefilter(name) and hasattr(obj, '__call__'):
Class = collector._getcustomclass("Class")
return Class(name, parent=collector)
elif collector.funcnamefilter(name) and hasattr(obj, '__call__') and \
getfixturemarker(obj) is None:
if is_generator(obj):
return Generator(name, parent=collector)
else:
@@ -277,13 +277,12 @@ class PyCollector(PyobjMixin, pytest.Collector):
if name in seen:
continue
seen[name] = True
if name[0] != "_":
res = self.makeitem(name, obj)
if res is None:
continue
if not isinstance(res, list):
res = [res]
l.extend(res)
res = self.makeitem(name, obj)
if res is None:
continue
if not isinstance(res, list):
res = [res]
l.extend(res)
l.sort(key=lambda item: item.reportinfo()[:2])
return l
@@ -395,6 +394,11 @@ class Module(pytest.File, PyCollector):
class Class(PyCollector):
""" Collector for test methods. """
def collect(self):
if hasinit(self.obj):
pytest.skip("class %s.%s with __init__ won't get collected" % (
self.obj.__module__,
self.obj.__name__,
))
return [self._getcustomclass("Instance")(name="()", parent=self)]
def setup(self):
@@ -528,7 +532,7 @@ def hasinit(obj):
def fillfixtures(function):
""" fill missing funcargs for a test function. """
if getattr(function, "_args", None) is None: # not a yielded function
if 1 or getattr(function, "_args", None) is None: # not a yielded function
try:
request = function._request
except AttributeError:
@@ -643,8 +647,11 @@ class Metafunc(FuncargnamesCompatAttr):
:arg argnames: an argument name or a list of argument names
:arg argvalues: a list of values for the argname or a list of tuples of
values for the list of argument names.
:arg argvalues: The list of argvalues determines how often a test is invoked
with different argument values. If only one argname was specified argvalues
is a list of simple values. If N argnames were specified, argvalues must
be a list of N-tuples, where each tuple-element specifies a value for its
respective argname.
:arg indirect: if True each argvalue corresponding to an argname will
be passed as request.param to its respective argname fixture
@@ -730,7 +737,7 @@ def idmaker(argnames, argvalues):
this_id = []
for nameindex, val in enumerate(valset):
if not isinstance(val, (float, int, str)):
this_id.append(argnames[nameindex]+str(valindex))
this_id.append(str(argnames[nameindex])+str(valindex))
else:
this_id.append(str(val))
idlist.append("-".join(this_id))
@@ -906,23 +913,30 @@ class Function(FunctionMixin, pytest.Item, FuncargnamesCompatAttr):
self.keywords[name] = val
fm = self.session._fixturemanager
self._fixtureinfo = fi = fm.getfixtureinfo(self.parent,
self.obj, self.cls)
isyield = self._isyieldedfunction()
self._fixtureinfo = fi = fm.getfixtureinfo(self.parent, self.obj,
self.cls,
funcargs=not isyield)
self.fixturenames = fi.names_closure
if callspec is not None:
self.callspec = callspec
self._initrequest()
def _initrequest(self):
if self._isyieldedfunction():
assert not callspec, (
assert not hasattr(self, "callspec"), (
"yielded functions (deprecated) cannot have funcargs")
self.funcargs = {}
else:
if callspec is not None:
self.callspec = callspec
self.funcargs = callspec.funcargs or {}
if hasattr(self, "callspec"):
callspec = self.callspec
self.funcargs = callspec.funcargs.copy()
self._genid = callspec.id
if hasattr(callspec, "param"):
self.param = callspec.param
else:
self.funcargs = {}
self._request = req = FixtureRequest(self)
#req._discoverfactories()
self._request = FixtureRequest(self)
@property
def function(self):
@@ -1197,8 +1211,10 @@ class FixtureRequest(FuncargnamesCompatAttr):
self._fixturestack.pop()
def _getfuncargvalue(self, fixturedef):
if fixturedef.active:
return fixturedef.cached_result
try:
return fixturedef.cached_result # set by fixturedef.execute()
except AttributeError:
pass
# prepare request fixturename and param attributes before
# calling into fixture function
@@ -1224,6 +1240,7 @@ class FixtureRequest(FuncargnamesCompatAttr):
if paramscopenum != scopenum_subfunction:
scope = scopes[paramscopenum]
# check if a higher-level scoped fixture accesses a lower level one
if scope is not None:
__tracebackhide__ = True
if scopemismatch(self.scope, scope):
@@ -1236,15 +1253,18 @@ class FixtureRequest(FuncargnamesCompatAttr):
__tracebackhide__ = False
mp.setattr(self, "scope", scope)
# route request.addfinalizer to fixturedef
mp.setattr(self, "addfinalizer", fixturedef.addfinalizer)
# perform the fixture call
val = fixturedef.execute(request=self)
# prepare finalization according to scope
# (XXX analyse exact finalizing mechanics / cleanup)
self.session._setupstate.addfinalizer(fixturedef.finish, self.node)
self._fixturemanager.addargfinalizer(fixturedef.finish, argname)
for subargname in fixturedef.argnames: # XXX all deps?
self._fixturemanager.addargfinalizer(fixturedef.finish, subargname)
mp.setattr(self, "addfinalizer", fixturedef.addfinalizer)
# finally perform the fixture call
val = fixturedef.execute(request=self)
mp.undo()
return val
@@ -1392,13 +1412,13 @@ class FixtureManager:
self._nodename2fixtureinfo = {}
def getfixtureinfo(self, node, func, cls):
def getfixtureinfo(self, node, func, cls, funcargs=True):
key = (node, func.__name__)
try:
return self._nodename2fixtureinfo[key]
except KeyError:
pass
if not hasattr(node, "nofuncargs"):
if funcargs and not hasattr(node, "nofuncargs"):
if cls is not None:
startindex = 1
else:
@@ -1449,7 +1469,7 @@ class FixtureManager:
for baseid, basenames in self._nodeid_and_autousenames:
if nodeid.startswith(baseid):
if baseid:
i = len(baseid) + 1
i = len(baseid)
nextchar = nodeid[i:i+1]
if nextchar and nextchar not in ":/":
continue
@@ -1503,6 +1523,8 @@ class FixtureManager:
items[:] = parametrize_sorted(items, set(), {}, 0)
def pytest_runtest_teardown(self, item, nextitem):
# XXX teardown needs to be normalized for parametrized and
# no-parametrized functions
try:
cs1 = item.callspec
except AttributeError:
@@ -1524,7 +1546,7 @@ class FixtureManager:
keylist.sort()
for (scopenum, name, param) in keylist:
item.session._setupstate._callfinalizers((name, param))
l = self._arg2finish.get(name)
l = self._arg2finish.pop(name, None)
if l is not None:
for fin in reversed(l):
fin()
@@ -1545,15 +1567,7 @@ class FixtureManager:
continue
# fixture functions have a pytest_funcarg__ prefix (pre-2.3 style)
# or are "@pytest.fixture" marked
try:
marker = obj._pytestfixturefunction
except KeyboardInterrupt:
raise
except Exception:
# some objects raise errors like request (from flask import request)
# we don't expect them to be fixture functions
marker = None
marker = getfixturemarker(obj)
if marker is None:
if not name.startswith(self._argprefix):
continue
@@ -1601,9 +1615,9 @@ class FixtureManager:
class FixtureDef:
""" A container for a factory definition. """
def __init__(self, fixturenanager, baseid, argname, func, scope, params,
def __init__(self, fixturemanager, baseid, argname, func, scope, params,
unittest=False):
self._fixturemanager = fixturenanager
self._fixturemanager = fixturemanager
self.baseid = baseid
self.func = func
self.argname = argname
@@ -1613,7 +1627,6 @@ class FixtureDef:
startindex = unittest and 1 or None
self.argnames = getfuncargnames(func, startindex=startindex)
self.unittest = unittest
self.active = False
self._finalizer = []
def addfinalizer(self, finalizer):
@@ -1625,9 +1638,11 @@ class FixtureDef:
func()
# check neccesity of next commented call
self._fixturemanager.removefinalizer(self.finish)
self.active = False
#print "finished", self
#del self.cached_result
try:
del self.cached_result
except AttributeError:
pass
def execute(self, request):
kwargs = {}
@@ -1649,7 +1664,7 @@ class FixtureDef:
except AttributeError:
pass
result = fixturefunc(**kwargs)
self.active = True
assert not hasattr(self, "cached_result")
self.cached_result = result
return result
@@ -1749,6 +1764,18 @@ def getfuncargparams(item, ignore, scopenum, cache):
def xunitsetup(obj, name):
meth = getattr(obj, name, None)
if meth is not None:
if not hasattr(meth, "_pytestfixturefunction"):
return meth
if getfixturemarker(meth) is None:
return meth
def getfixturemarker(obj):
""" return fixturemarker or None if it doesn't exist or raised
exceptions."""
try:
return getattr(obj, "_pytestfixturefunction", None)
except KeyboardInterrupt:
raise
except Exception:
# some objects raise errors like request (from flask import request)
# we don't expect them to be fixture functions
return None


@@ -63,12 +63,20 @@ def pytest_runtest_protocol(item, nextitem):
return True
def runtestprotocol(item, log=True, nextitem=None):
hasrequest = hasattr(item, "_request")
if hasrequest and not item._request:
item._initrequest()
rep = call_and_report(item, "setup", log)
reports = [rep]
if rep.passed:
reports.append(call_and_report(item, "call", log))
reports.append(call_and_report(item, "teardown", log,
nextitem=nextitem))
# after all teardown hooks have been called
# want funcargs and request info to go away
if hasrequest:
item._request = False
item.funcargs = None
return reports
def pytest_runtest_setup(item):
@@ -364,6 +372,7 @@ class OutcomeException(Exception):
contain info about test and collection outcomes.
"""
def __init__(self, msg=None, pytrace=True):
Exception.__init__(self, msg)
self.msg = msg
self.pytrace = pytrace


@@ -86,7 +86,7 @@ class MarkEvaluator:
self.result = False
for expr in self.holder.args:
self.expr = expr
if isinstance(expr, str):
if isinstance(expr, py.builtin._basestring):
result = cached_eval(self.item.config, expr, d)
else:
pytest.fail("expression is not a string")


@@ -57,7 +57,7 @@ if __name__ == "__main__":
sources = pickle.loads(zlib.decompress(base64.decodestring(sources)))
importer = DictImporter(sources)
sys.meta_path.append(importer)
sys.meta_path.insert(0, importer)
entry = "@ENTRY@"
do_exec(entry, locals())


@@ -331,7 +331,7 @@ class TerminalReporter:
def pytest_sessionfinish(self, exitstatus, __multicall__):
__multicall__.execute()
self._tw.line("")
if exitstatus in (0, 1, 2):
if exitstatus in (0, 1, 2, 4):
self.summary_errors()
self.summary_failures()
self.config.hook.pytest_terminal_summary(terminalreporter=self)


@@ -54,15 +54,15 @@ def pytest_configure(config):
mp.setattr(config, '_tmpdirhandler', t, raising=False)
mp.setattr(pytest, 'ensuretemp', t.ensuretemp, raising=False)
def pytest_funcarg__tmpdir(request):
@pytest.fixture
def tmpdir(request):
"""return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
"""
name = request._pyfuncitem.name
name = request.node.name
name = py.std.re.sub("[\W]", "_", name)
x = request.config._tmpdirhandler.mktemp(name, numbered=True)
return x
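For reference, a minimal usage sketch of the fixture declared above (standard pytest idiom, not part of this diff):

def test_create_file(tmpdir):
    p = tmpdir.join("hello.txt")           # tmpdir is a py.path.local object
    p.write("content")
    assert p.read() == "content"
    assert len(tmpdir.listdir()) == 1      # a fresh directory per test invocation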

View File

@@ -31,6 +31,8 @@ help:
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@@ -142,3 +144,18 @@ doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
texinfo:
mkdir -p $(BUILDDIR)/texinfo
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
mkdir -p $(BUILDDIR)/texinfo
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

View File

@@ -5,6 +5,7 @@ Release announcements
.. toctree::
:maxdepth: 2
release-2.3.4
release-2.3.3
release-2.3.2
release-2.3.1

View File

@@ -0,0 +1,62 @@
pytest-2.3.3: integration fixes, py24 support, ``*/**`` shown in traceback
===========================================================================
pytest-2.3.3 is another stabilization release of the py.test tool
which offers uebersimple assertions, scalable fixture mechanisms
and deep customization for testing with Python. Particularly,
this release provides:
- integration fixes and improvements related to flask, numpy, nose,
unittest, mock
- makes pytest work on py24 again (yes, people sometimes still need to use it)
- show ``*,**`` args in pytest tracebacks
Thanks to Manuel Jacob, Thomas Waldmann, Ronny Pfannschmidt, Pavel Repin
and Andreas Taumoefolau for providing patches and to all who reported issues.
See
http://pytest.org/
for general information. To install or upgrade pytest:
pip install -U pytest # or
easy_install -U pytest
best,
holger krekel
Changes between 2.3.2 and 2.3.3
-----------------------------------
- fix issue214 - parse modules that contain special objects such as
flask's request object which blows up on getattr access if no request
is active. thanks Thomas Waldmann.
- fix issue213 - allow to parametrize with values like numpy arrays that
do not support an __eq__ operator
- fix issue215 - split test_python.py into multiple files
- fix issue148 - @unittest.skip on classes is now recognized and avoids
calling setUpClass/tearDownClass, thanks Pavel Repin
- fix issue209 - reintroduce python2.4 support by depending on newer
pylib which re-introduced statement-finding for pre-AST interpreters
- nose support: only call setup if it's a callable, thanks Andrew
Taumoefolau
- fix issue219 - add py2.4-3.3 classifiers to TROVE list
- in tracebacks *,** arg values are now shown next to normal arguments
(thanks Manuel Jacob)
- fix issue217 - support mock.patch with pytest's fixtures - note that
you need either mock-1.0.1 or the python3.3 builtin unittest.mock.
- fix issue127 - improve documentation for pytest_addoption() and
add a ``config.getoption(name)`` helper function for consistency.

View File

@@ -0,0 +1,39 @@
pytest-2.3.4: stabilization, more flexible selection via "-k expr"
===========================================================================
pytest-2.3.4 is a small stabilization release of the py.test tool
which offers uebersimple assertions, scalable fixture mechanisms
and deep customization for testing with Python. This release
comes with the following fixes and features:
- make "-k" option accept an expressions the same as with "-m" so that one
can write: -k "name1 or name2" etc. This is a slight usage incompatibility
if you used special syntax like "TestClass.test_method" which you now
need to write as -k "TestClass and test_method" to match a certain
method in a certain test class.
- allow to dynamically define markers via
item.keywords[...]=assignment integrating with "-m" option
- yielded test functions will now have autouse-fixtures active but
cannot accept fixtures as funcargs - it's recommended to use the
post-2.0 parametrize features instead of yield anyway, see:
http://pytest.org/latest/example/parametrize.html
- fix autouse-issue where autouse-fixtures would not be discovered
if defined in an a/conftest.py file and tests in a/tests/test_some.py
- fix issue226 - LIFO ordering for fixture teardowns
- fix issue224 - invocations with >256 char arguments now work
- fix issue91 - add/discuss package/directory level setups in example
- fixes related to autouse discovery and calling
Thanks in particular to Thomas Waldmann for spotting and reporting issues.
See
http://pytest.org/
for general information. To install or upgrade pytest:
pip install -U pytest # or
easy_install -U pytest
best,
holger krekel

View File

@@ -0,0 +1,97 @@
pytest-2.3.5: bug fixes and little improvements
===========================================================================
pytest-2.3.5 is a maintenance release with many bug fixes and little
improvements. See the changelog below for details. No backward
compatibility issues are foreseen and all plugins which worked with the
prior version are expected to work unmodified. Speaking of which, a
few interesting new plugins saw the light last month:
- pytest-instafail: show failure information while tests are running
- pytest-qt: testing of GUI applications written with QT/Pyside
- pytest-xprocess: managing external processes across test runs
- pytest-random: randomize test ordering
And several others like pytest-django saw maintenance releases.
For a more complete list, check out
https://pypi.python.org/pypi?%3Aaction=search&term=pytest&submit=search.
For general information see:
http://pytest.org/
To install or upgrade pytest:
pip install -U pytest # or
easy_install -U pytest
Particular thanks to Floris, Ronny, Benjamin and the many bug reporters
and fix providers.
may the fixtures be with you,
holger krekel
Changes between 2.3.4 and 2.3.5
-----------------------------------
- never consider a fixture function for test function collection
- allow re-running of test items / helps to fix the pytest-rerunfailures plugin
and also helps to keep fewer fixture/resource references alive
- put captured stdout/stderr into junitxml output even for passing tests
(thanks Adam Goucher)
- Issue 265 - integrate nose setup/teardown with setupstate
so it doesn't try to teardown if it did not setup
- issue 271 - don't write junitxml on slave nodes
- Issue 274 - don't try to show full doctest example
when doctest does not know the example location
- issue 280 - disable assertion rewriting on buggy CPython 2.6.0
- inject "getfixture()" helper to retrieve fixtures from doctests,
thanks Andreas Zeidler
- issue 259 - when assertion rewriting, be consistent with the default
source encoding of ASCII on Python 2
- issue 251 - report a skip instead of ignoring classes with an __init__ constructor
- issue250 unicode/str mixes in parametrization names and values now work
- issue257, assertion-triggered compilation of source ending in a
comment line doesn't blow up in python2.5 (fixed through py>=1.4.13.dev6)
- fix --genscript option to generate standalone scripts that also
work with python3.3 (importer ordering)
- issue171 - in assertion rewriting, show the repr of some
global variables
- fix option help for "-k"
- move long description of distribution into README.rst
- improve docstring for metafunc.parametrize()
- fix bug where using capsys with pytest.set_trace() in a test
function would break when looking at capsys.readouterr()
- allow to specify prefixes starting with "_" when
customizing python_functions test discovery. (thanks Graham Horler)
- improve PYTEST_DEBUG tracing output by putting
extra data on new lines with additional indent
- ensure OutcomeExceptions like skip/fail have initialized exception attributes
- issue 260 - don't use nose special setup on plain unittest cases
- fix issue134 - print the collect errors that prevent running specified test items
- fix issue266 - accept unicode in MarkEvaluator expressions

View File

@@ -26,7 +26,7 @@ you will see the return value of the function call::
$ py.test test_assert1.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
test_assert1.py F
@@ -110,7 +110,7 @@ if you run this module::
$ py.test test_assert2.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
test_assert2.py F

View File

@@ -64,7 +64,7 @@ of the failing function and hide the other one::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_module.py .F
@@ -78,7 +78,7 @@ of the failing function and hide the other one::
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x2d63d70>
setting up <function test_func2 at 0x2d79f50>
==================== 1 failed, 1 passed in 0.01 seconds ====================
Accessing captured output from a test function

View File

@@ -17,7 +17,7 @@
#
# The full version, including alpha/beta/rc tags.
# The short X.Y version.
version = release = "2.3.3"
version = release = "2.3.4.1"
import sys, os
@@ -273,6 +273,19 @@ epub_copyright = u'2012, holger krekel et alii'
#epub_tocdup = True
# -- Options for texinfo output ------------------------------------------------
texinfo_documents = [
(master_doc, 'pytest', 'pytest Documentation',
('Holger Krekel@*Benjamin Peterson@*Ronny Pfannschmidt@*'
'Floris Bruynooghe@*others'),
'pytest',
'simple powerful testing with Python',
'Programming',
1),
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {'python': ('http://docs.python.org/', None),
# 'lib': ("http://docs.python.org/2.7library/", None),

View File

@@ -5,27 +5,28 @@
Contact channels
===================================
- `new issue tracker`_ to report bugs or suggest features (for version
2.0 and above). You may also peek at the `old issue tracker`_ but please
don't submit bugs there anymore.
- `pytest issue tracker`_ to report bugs or suggest features (for version
2.0 and above).
- `pytest on stackoverflow.com <http://stackoverflow.com/search?q=pytest>`_
to post questions with the tag ``pytest``. New Questions will usually
be seen by pytest users or developers.
be seen by pytest users or developers and answered quickly.
- `Testing In Python`_: a mailing list for Python testing tools and discussion.
- `py-dev developers list`_ pytest specific announcements and discussions.
- `pytest-dev at python.org (mailing list)`_ pytest specific announcements and discussions.
- `pytest-commit at python.org (mailing list)`_: for commits and new issues
- #pylib on irc.freenode.net IRC channel for random questions.
- private mail to Holger.Krekel at gmail com if you want to communicate sensitive issues
- `commit mailing list`_
- `merlinux.eu`_ offers on-site teaching and consulting services.
- `merlinux.eu`_ offers pytest and tox-related professional teaching and
consulting.
.. _`new issue tracker`: http://bitbucket.org/hpk42/pytest/issues/
.. _`pytest issue tracker`: http://bitbucket.org/hpk42/pytest/issues/
.. _`old issue tracker`: http://bitbucket.org/hpk42/py-trunk/issues/
.. _`merlinux.eu`: http://merlinux.eu
@@ -41,7 +42,7 @@ Contact channels
.. _FOAF: http://en.wikipedia.org/wiki/FOAF
.. _`py-dev`:
.. _`development mailing list`:
.. _`py-dev developers list`: http://codespeak.net/mailman/listinfo/py-dev
.. _`pytest-dev at python.org (mailing list)`: http://mail.python.org/mailman/listinfo/pytest-dev
.. _`py-svn`:
.. _`commit mailing list`: http://codespeak.net/mailman/listinfo/py-svn
.. _`pytest-commit at python.org (mailing list)`: http://mail.python.org/mailman/listinfo/pytest-commit

View File

@@ -44,9 +44,15 @@ then you can just invoke ``py.test`` without command line options::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
mymodule.py .
========================= 1 passed in 0.02 seconds =========================
It is possible to use fixtures using the ``getfixture`` helper::
# content of example.rst
>>> tmp = getfixture('tmpdir')
>>> ...

View File

@@ -66,7 +66,7 @@ class TestSpecialisedExplanations(object):
assert a == b
def test_eq_dict(self):
assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
def test_eq_set(self):
assert set([0, 10, 11, 12]) == set([0, 20, 21])

View File

@@ -29,5 +29,6 @@ The following examples aim at various use cases you might encounter.
simple.txt
parametrize.txt
markers.txt
special.txt
pythoncollection.txt
nonpython.txt

View File

@@ -19,6 +19,8 @@ You can "mark" a test function with custom metadata like this::
pass # perform some webtest test for your app
def test_something_quick():
pass
def test_another():
pass
.. versionadded:: 2.2
@@ -26,25 +28,72 @@ You can then restrict a test run to only run tests marked with ``webtest``::
$ py.test -v -m webtest
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
=================== 1 tests deselected by "-m 'webtest'" ===================
================== 1 passed, 1 deselected in 0.02 seconds ==================
=================== 2 tests deselected by "-m 'webtest'" ===================
================== 1 passed, 2 deselected in 0.01 seconds ==================
Or the inverse, running all tests except the webtest ones::
$ py.test -v -m "not webtest"
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:6: test_something_quick PASSED
test_server.py:8: test_another PASSED
================= 1 tests deselected by "-m 'not webtest'" =================
================== 1 passed, 1 deselected in 0.01 seconds ==================
================== 2 passed, 1 deselected in 0.01 seconds ==================
Using ``-k expr`` to select tests based on their name
-------------------------------------------------------
.. versionadded:: 2.0/2.3.4
You can use the ``-k`` command line option to specify an expression
which implements a substring match on the test names instead of the
exact match on markers that ``-m`` provides. This makes it easy to
select tests based on their names::
$ py.test -v -k http # running with the above defined example module
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
====================== 2 tests deselected by '-khttp' ======================
================== 1 passed, 2 deselected in 0.01 seconds ==================
And you can also run all tests except the ones that match the keyword::
$ py.test -k "not send_http" -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:6: test_something_quick PASSED
test_server.py:8: test_another PASSED
================= 1 tests deselected by '-knot send_http' ==================
================== 2 passed, 1 deselected in 0.01 seconds ==================
Or to select "http" and "quick" tests::
$ py.test -k "http or quick" -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items
test_server.py:3: test_send_http PASSED
test_server.py:6: test_something_quick PASSED
================= 1 tests deselected by '-khttp or quick' ==================
================== 2 passed, 1 deselected in 0.01 seconds ==================
Registering markers
-------------------------------------
@@ -137,46 +186,6 @@ in which case it will be applied to all functions and
methods defined in the module.
Using ``-k TEXT`` to select tests
----------------------------------------------------
You can use the ``-k`` command line option to only run tests with names matching
the given argument::
$ py.test -k send_http # running with the above defined examples
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
collected 4 items
test_server.py .
=================== 3 tests deselected by '-ksend_http' ====================
================== 1 passed, 3 deselected in 0.01 seconds ==================
And you can also run all tests except the ones that match the keyword::
$ py.test -k-send_http
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
collected 4 items
test_mark_classlevel.py ..
test_server.py .
=================== 1 tests deselected by '-k-send_http' ===================
================== 3 passed, 1 deselected in 0.01 seconds ==================
Or to only select the class::
$ py.test -kTestClass
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
collected 4 items
test_mark_classlevel.py ..
=================== 2 tests deselected by '-kTestClass' ====================
================== 2 passed, 2 deselected in 0.01 seconds ==================
.. _`adding a custom marker from a plugin`:
@@ -223,7 +232,7 @@ the test needs::
$ py.test -E stage2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
test_someenv.py s
@@ -234,7 +243,7 @@ and here is one that specifies exactly the environment needed::
$ py.test -E stage1
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
test_someenv.py .
@@ -351,12 +360,12 @@ then you will see two test skipped and two executed tests as expected::
$ py.test -rs # this option reports skip reasons
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items
test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /tmp/doc-exec-57/conftest.py:12: cannot run on platform linux2
SKIP [2] /tmp/doc-exec-273/conftest.py:12: cannot run on platform linux2
=================== 2 passed, 2 skipped in 0.01 seconds ====================
@@ -364,7 +373,7 @@ Note that if you specify a platform via the marker-command line option like this
$ py.test -m linux2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items
test_plat.py .
@@ -373,3 +382,86 @@ Note that if you specify a platform via the marker-command line option like this
================== 1 passed, 3 deselected in 0.01 seconds ==================
then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
Automatically adding markers based on test names
--------------------------------------------------------
.. regendoc:wipe
If you have a test suite where test function names indicate a certain
type of test, you can implement a hook that automatically defines
markers so that you can use the ``-m`` option with it. Let's look
at this test module::
# content of test_module.py
def test_interface_simple():
assert 0
def test_interface_complex():
assert 0
def test_event_simple():
assert 0
def test_something_else():
assert 0
We want to dynamically define two markers and can do it in a
``conftest.py`` plugin::
# content of conftest.py
import pytest
def pytest_collection_modifyitems(items):
for item in items:
if "interface" in item.nodeid:
item.keywords["interface"] = pytest.mark.interface
elif "event" in item.nodeid:
item.keywords["event"] = pytest.mark.event
We can now use the ``-m`` option to select one set::
$ py.test -m interface --tb=short
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items
test_module.py FF
================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
> assert 0
E assert 0
__________________________ test_interface_complex __________________________
test_module.py:6: in test_interface_complex
> assert 0
E assert 0
================== 2 tests deselected by "-m 'interface'" ==================
================== 2 failed, 2 deselected in 0.01 seconds ==================
or to select both "event" and "interface" tests::
$ py.test -m "interface or event" --tb=short
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items
test_module.py FFF
================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
> assert 0
E assert 0
__________________________ test_interface_complex __________________________
test_module.py:6: in test_interface_complex
> assert 0
E assert 0
____________________________ test_event_simple _____________________________
test_module.py:9: in test_event_simple
> assert 0
E assert 0
============= 1 tests deselected by "-m 'interface or event'" ==============
================== 3 failed, 1 deselected in 0.02 seconds ==================

View File

@@ -27,7 +27,7 @@ now execute the test specification::
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_simple.yml .F
@@ -37,7 +37,7 @@ now execute the test specification::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.04 seconds ====================
==================== 1 failed, 1 passed in 0.05 seconds ====================
You get one dot for the passing ``sub1: sub1`` check and one failure.
Obviously in the above ``conftest.py`` you'll want to implement a more
@@ -56,7 +56,7 @@ consulted when reporting in ``verbose`` mode::
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_simple.yml:1: usecase: ok PASSED
@@ -67,17 +67,17 @@ consulted when reporting in ``verbose`` mode::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
==================== 1 failed, 1 passed in 0.04 seconds ====================
==================== 1 failed, 1 passed in 0.05 seconds ====================
While developing your custom test collection and execution it's also
interesting to just look at the collection tree::
nonpython $ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
<YamlFile 'test_simple.yml'>
<YamlItem 'ok'>
<YamlItem 'hello'>
============================= in 0.04 seconds =============================
============================= in 0.05 seconds =============================

View File

@@ -104,7 +104,7 @@ this is a fully self-contained example which you can run with::
$ py.test test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items
test_scenarios.py ....
@@ -116,7 +116,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
$ py.test --collectonly test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
@@ -180,7 +180,7 @@ Let's first see how it looks like at collection time::
$ py.test test_backends.py --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
@@ -195,7 +195,7 @@ And then when we run the test::
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________
db = <conftest.DB2 instance at 0x1d8aef0>
db = <conftest.DB2 instance at 0x2038f80>
def test_db_initialized(db):
# a dummy test
@@ -250,7 +250,7 @@ argument sets to use for each test function. Let's run it::
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________
self = <test_parametrize.TestClass instance at 0x1628cb0>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0x1338f80>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
@@ -278,3 +278,73 @@ Running it results in some skips if we don't have all the python interpreters in
............sss............sss............sss............ssssssssssssssssss
========================= short test summary info ==========================
SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found
Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
If you want to compare the outcomes of several implementations of a given
API, you can write test functions that receive the already imported implementations
and get skipped in case the implementation is not importable/available. Let's
say we have a "base" implementation and the other (possibly optimized ones)
need to provide similar results::
# content of conftest.py
import pytest
@pytest.fixture(scope="session")
def basemod(request):
return pytest.importorskip("base")
@pytest.fixture(scope="session", params=["opt1", "opt2"])
def optmod(request):
return pytest.importorskip(request.param)
And then a base implementation of a simple function::
# content of base.py
def func1():
return 1
And an optimized version::
# content of opt1.py
def func1():
return 1.0001
And finally a little test module::
# content of test_module.py
def test_func1(basemod, optmod):
assert round(basemod.func1(), 3) == round(optmod.func1(), 3)
If you run this with reporting for skips enabled::
$ py.test -rs test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-275/conftest.py:10: could not import 'opt2'
=================== 1 passed, 1 skipped in 0.01 seconds ====================
You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped. A few notes:
- the fixture functions in the ``conftest.py`` file are "session-scoped" because we
don't need to import more than once
- if you have multiple test functions and a skipped import, you will see
the ``[1]`` count increasing in the report
- you can put :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>` style
parametrization on the test functions to parametrize input/output
values as well, as in the sketch below.
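For example, a hedged sketch of such a combination, reusing the ``basemod``/``optmod`` fixtures from the ``conftest.py`` above (the parametrized values are made up for illustration)::

# content of test_module.py - hypothetical extension
import pytest

@pytest.mark.parametrize("ndigits, expected", [(0, 1.0), (3, 1.0)])
def test_func1_rounded(basemod, optmod, ndigits, expected):
    # both implementations must agree with the expected value after rounding
    assert round(basemod.func1(), ndigits) == expected
    assert round(optmod.func1(), ndigits) == expected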

View File

@@ -43,7 +43,7 @@ then the test collection looks like this::
$ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
<Module 'check_myapp.py'>
<Class 'CheckMyApp'>
@@ -82,7 +82,7 @@ You can always peek at the collection tree without running tests like this::
. $ py.test --collectonly pythoncollection.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 3 items
<Module 'pythoncollection.py'>
<Function 'test_function'>
@@ -135,7 +135,7 @@ interpreters and will leave out the setup.py file::
$ py.test --collectonly
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
<Module 'pkg/module_py2.py'>
<Function 'test_only_on_python2'>

View File

@@ -13,7 +13,7 @@ get on the terminal - we are working on that):
assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 39 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@@ -30,7 +30,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0x1136710>
self = <failure_demo.TestFailing object at 0x1445e10>
def test_simple(self):
def f():
@@ -40,13 +40,13 @@ get on the terminal - we are working on that):
> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x1146410>()
E + and 43 = <function g at 0x1146488>()
E + where 42 = <function f at 0x137c6e0>()
E + and 43 = <function g at 0x137c758>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0x11329d0>
self = <failure_demo.TestFailing object at 0x135a1d0>
def test_simple_multiline(self):
otherfunc_multi(
@@ -66,19 +66,19 @@ get on the terminal - we are working on that):
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0x10d09d0>
self = <failure_demo.TestFailing object at 0x1458ed0>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x1146848>()
E + where 42 = <function f at 0x137caa0>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x10ca210>
self = <failure_demo.TestSpecialisedExplanations object at 0x14451d0>
def test_eq_text(self):
> assert 'spam' == 'eggs'
@@ -89,7 +89,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0x11368d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1458c90>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@@ -102,7 +102,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x11340d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1434390>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -115,15 +115,15 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x10cfd90>
self = <failure_demo.TestSpecialisedExplanations object at 0x1459f50>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
b = '1'*100 + 'b' + '2'*100
> assert a == b
E assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E Skipping 90 identical leading characters in diff
E Skipping 91 identical trailing characters in diff
E Skipping 90 identical leading characters in diff, use -v to show
E Skipping 91 identical trailing characters in diff, use -v to show
E - 1111111111a222222222
E ? ^
E + 1111111111b222222222
@@ -132,15 +132,15 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x10d0b10>
self = <failure_demo.TestSpecialisedExplanations object at 0x135a790>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
b = '1\n'*100 + 'b' + '2\n'*100
> assert a == b
E assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E Skipping 190 identical leading characters in diff
E Skipping 191 identical trailing characters in diff
E Skipping 190 identical leading characters in diff, use -v to show
E Skipping 191 identical trailing characters in diff, use -v to show
E 1
E 1
E 1
@@ -156,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1142dd0>
self = <failure_demo.TestSpecialisedExplanations object at 0x138dfd0>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@@ -166,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1136850>
self = <failure_demo.TestSpecialisedExplanations object at 0x135a990>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@@ -178,20 +178,23 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1134e10>
self = <failure_demo.TestSpecialisedExplanations object at 0x1459310>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
E assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
E - {'a': 0, 'b': 1}
E ? ^
E + {'a': 0, 'b': 2}
E ? ^
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E Hiding 1 identical items, use -v to show
E Differing items:
E {'b': 1} != {'b': 2}
E Left contains more items:
E {'c': 0}
E Right contains more items:
E {'d': 0}
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x1169c90>
self = <failure_demo.TestSpecialisedExplanations object at 0x1434310>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@@ -207,7 +210,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x1142c50>
self = <failure_demo.TestSpecialisedExplanations object at 0x138ded0>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@@ -217,7 +220,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x10d0d90>
self = <failure_demo.TestSpecialisedExplanations object at 0x1459e10>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
@@ -226,7 +229,7 @@ get on the terminal - we are working on that):
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x10e0110>
self = <failure_demo.TestSpecialisedExplanations object at 0x1434950>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@@ -244,7 +247,7 @@ get on the terminal - we are working on that):
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x10ca7d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x138dbd0>
def test_not_in_text_single(self):
text = 'single foo line'
@@ -257,7 +260,7 @@ get on the terminal - we are working on that):
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0x1142750>
self = <failure_demo.TestSpecialisedExplanations object at 0x14593d0>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@@ -270,7 +273,7 @@ get on the terminal - we are working on that):
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0x1134410>
self = <failure_demo.TestSpecialisedExplanations object at 0x1459650>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@@ -289,7 +292,7 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x10e07d0>.b
E + where 1 = <failure_demo.Foo object at 0x1434850>.b
failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
@@ -299,8 +302,8 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1132390>.b
E + where <failure_demo.Foo object at 0x1132390> = <class 'failure_demo.Foo'>()
E + where 1 = <failure_demo.Foo object at 0x1459dd0>.b
E + where <failure_demo.Foo object at 0x1459dd0> = <class 'failure_demo.Foo'>()
failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
@@ -316,7 +319,7 @@ get on the terminal - we are working on that):
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x1136fd0>
self = <failure_demo.Foo object at 0x1434150>
def _get_b(self):
> raise Exception('Failed to get attrib')
@@ -332,15 +335,15 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x1134c50>.b
E + where <failure_demo.Foo object at 0x1134c50> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1134790>.b
E + where <failure_demo.Bar object at 0x1134790> = <class 'failure_demo.Bar'>()
E + where 1 = <failure_demo.Foo object at 0x14590d0>.b
E + where <failure_demo.Foo object at 0x14590d0> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1459b10>.b
E + where <failure_demo.Bar object at 0x1459b10> = <class 'failure_demo.Bar'>()
failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x10dc098>
self = <failure_demo.TestRaises instance at 0x13a0d88>
def test_raises(self):
s = 'qwe'
@@ -352,10 +355,10 @@ get on the terminal - we are working on that):
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/.tox/regen/lib/python2.7/site-packages/_pytest/python.py:851>:1: ValueError
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:858>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x10d8320>
self = <failure_demo.TestRaises instance at 0x145fcf8>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
@@ -364,7 +367,7 @@ get on the terminal - we are working on that):
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x10c0680>
self = <failure_demo.TestRaises instance at 0x13a9ea8>
def test_raise(self):
> raise ValueError("demo error")
@@ -373,7 +376,7 @@ get on the terminal - we are working on that):
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x11604d0>
self = <failure_demo.TestRaises instance at 0x13843f8>
def test_tupleerror(self):
> a,b = [1]
@@ -382,7 +385,7 @@ get on the terminal - we are working on that):
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x10e2290>
self = <failure_demo.TestRaises instance at 0x14532d8>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@@ -395,7 +398,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x10e2f80>
self = <failure_demo.TestRaises instance at 0x139d290>
def test_some_error(self):
> if namenotexi:
@@ -423,7 +426,7 @@ get on the terminal - we are working on that):
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x10d1b90>
self = <failure_demo.TestMoreErrors instance at 0x137d758>
def test_complex_error(self):
def f():
@@ -452,7 +455,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x114f3b0>
self = <failure_demo.TestMoreErrors instance at 0x13a5200>
def test_z1_unpack_error(self):
l = []
@@ -462,7 +465,7 @@ get on the terminal - we are working on that):
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x11496c8>
self = <failure_demo.TestMoreErrors instance at 0x1395290>
def test_z2_type_error(self):
l = 3
@@ -472,19 +475,19 @@ get on the terminal - we are working on that):
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x10cec20>
self = <failure_demo.TestMoreErrors instance at 0x137f200>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x113b918>('456')
E + where <built-in method startswith of str object at 0x113b918> = '123'.startswith
E assert <built-in method startswith of str object at 0x143f288>('456')
E + where <built-in method startswith of str object at 0x143f288> = '123'.startswith
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x10c87a0>
self = <failure_demo.TestMoreErrors instance at 0x145fb00>
def test_startswith_nested(self):
def f():
@@ -492,15 +495,15 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
E assert <built-in method startswith of str object at 0x113b918>('456')
E + where <built-in method startswith of str object at 0x113b918> = '123'.startswith
E + where '123' = <function f at 0x10bea28>()
E + and '456' = <function g at 0x10beaa0>()
E assert <built-in method startswith of str object at 0x143f288>('456')
E + where <built-in method startswith of str object at 0x143f288> = '123'.startswith
E + where '123' = <function f at 0x13abaa0>()
E + and '456' = <function g at 0x13ab578>()
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x10c5488>
self = <failure_demo.TestMoreErrors instance at 0x139cd40>
def test_global_func(self):
> assert isinstance(globf(42), float)
@@ -510,18 +513,18 @@ get on the terminal - we are working on that):
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x113f710>
self = <failure_demo.TestMoreErrors instance at 0x13593b0>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x113f710>.x
E + where 42 = <failure_demo.TestMoreErrors instance at 0x13593b0>.x
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x10bae18>
self = <failure_demo.TestMoreErrors instance at 0x1465d40>
def test_compare(self):
> assert globf(10) < 5
@@ -531,7 +534,7 @@ get on the terminal - we are working on that):
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x1160248>
self = <failure_demo.TestMoreErrors instance at 0x1456ea8>
def test_try_finally(self):
x = 1
@@ -540,4 +543,4 @@ get on the terminal - we are working on that):
E assert 1 == 0
failure_demo.py:210: AssertionError
======================== 39 failed in 0.25 seconds =========================
======================== 39 failed in 0.21 seconds =========================

View File

@@ -106,7 +106,7 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 0 items
============================= in 0.00 seconds =============================
@@ -150,12 +150,12 @@ and when running it will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-62/conftest.py:9: need --runslow option to run
SKIP [1] /tmp/doc-exec-278/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.01 seconds ====================
@@ -163,7 +163,7 @@ Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_module.py ..
@@ -253,7 +253,7 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
project deps: mylib-1.1
collected 0 items
@@ -276,7 +276,7 @@ which will add info only when run with "--v"::
$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
@@ -287,7 +287,7 @@ and nothing when run plainly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 0 items
============================= in 0.00 seconds =============================
@@ -319,7 +319,7 @@ Now we can profile which test functions execute the slowest::
$ py.test --durations=3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 3 items
test_some_are_slow.py ...
@@ -327,7 +327,7 @@ Now we can profile which test functions execute the slowest::
========================= slowest 3 test durations =========================
0.20s call test_some_are_slow.py::test_funcslow2
0.10s call test_some_are_slow.py::test_funcslow1
0.00s call test_some_are_slow.py::test_funcfast
0.00s setup test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.31 seconds =========================
incremental testing - test steps
@@ -380,7 +380,7 @@ If we run this::
$ py.test -rx
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 4 items
test_step.py .Fx.
@@ -388,7 +388,7 @@ If we run this::
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling instance at 0x2677b90>
self = <test_step.TestUserHandling instance at 0x282b8c0>
def test_modification(self):
> assert 0
@@ -398,8 +398,283 @@ If we run this::
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion
reason: previous test failed (test_modification)
============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds ===============
============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
We'll see that ``test_deletion`` was not executed because ``test_modification``
failed. It is reported as an "expected failure".
Package/Directory-level fixtures (setups)
-------------------------------------------------------
If you have nested test directories, you can have per-directory fixture scopes
by placing fixture functions in a ``conftest.py`` file in that directory.
You can use all types of fixtures including :ref:`autouse fixtures
<autouse fixtures>` which are the equivalent of xUnit's setup/teardown
concept. It's however recommended to have explicit fixture references in your
tests or test classes rather than relying on implicitly executing
setup/teardown functions, especially if they are far away from the actual tests.
Here is an example of making a ``db`` fixture available in a directory::
# content of a/conftest.py
import pytest
class DB:
pass
@pytest.fixture(scope="session")
def db():
return DB()
and then a test module in that directory::
# content of a/test_db.py
def test_a1(db):
assert 0, db # to show value
another test module::
# content of a/test_db2.py
def test_a2(db):
assert 0, db # to show value
and then a module in a sister directory which will not see
the ``db`` fixture::
# content of b/test_error.py
def test_root(db): # no db here, will error out
pass
We can run this::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 7 items
test_step.py .Fx.
a/test_db.py F
a/test_db2.py F
b/test_error.py E
================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file /tmp/doc-exec-278/b/test_error.py, line 1
def test_root(db): # no db here, will error out
fixture 'db' not found
available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir
use 'py.test --fixtures [testpath]' for help on them.
/tmp/doc-exec-278/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________
self = <test_step.TestUserHandling instance at 0x26145f0>
def test_modification(self):
> assert 0
E assert 0
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________
db = <conftest.DB instance at 0x26211b8>
def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x26211b8>
a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________
db = <conftest.DB instance at 0x26211b8>
def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x26211b8>
a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
The two test modules in the ``a`` directory see the same ``db`` fixture instance
while the one test in the sister-directory ``b`` doesn't see it. We could of course
also define a ``db`` fixture in that sister directory's ``conftest.py`` file.
Note that each fixture is only instantiated if there is a test actually needing
it (unless you use "autouse" fixtures, which are always executed ahead of the
first test executing).
post-process test reports / failures
---------------------------------------
If you want to postprocess test reports and need access to the executing
environment you can implement a hook that gets called when the test
"report" object is about to be created. Here we write out all failing
test calls and also access a fixture (if it was used by the test) in
case you want to query/look at it during your post processing. In our
case we just write some information out to a ``failures`` file::
# content of conftest.py
import pytest
import os.path
@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
# execute all other hooks to obtain the report object
rep = __multicall__.execute()
# we only look at actual failing test calls, not setup/teardown
if rep.when == "call" and rep.failed:
mode = "a" if os.path.exists("failures") else "w"
with open("failures", mode) as f:
# let's also access a fixture for the fun of it
if "tmpdir" in item.funcargs:
extra = " (%s)" % item.funcargs["tmpdir"]
else:
extra = ""
f.write(rep.nodeid + extra + "\n")
return rep
if you then have failing tests::
# content of test_module.py
def test_fail1(tmpdir):
assert 0
def test_fail2():
assert 0
and run them::
$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_module.py FF
================================= FAILURES =================================
________________________________ test_fail1 ________________________________
tmpdir = local('/tmp/pytest-326/test_fail10')
def test_fail1(tmpdir):
> assert 0
E assert 0
test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:4: AssertionError
========================= 2 failed in 0.02 seconds =========================
you will have a "failures" file which contains the failing test ids::
$ cat failures
test_module.py::test_fail1 (/tmp/pytest-326/test_fail10)
test_module.py::test_fail2
Making test result information available in fixtures
-----------------------------------------------------------
.. regendoc:wipe
If you want to make test result reports available in fixture finalizers,
here is a little example implemented via a local plugin::
# content of conftest.py

import pytest

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    # execute all other hooks to obtain the report object
    rep = __multicall__.execute()

    # set a report attribute for each phase of a call, which can
    # be "setup", "call", "teardown"
    setattr(item, "rep_" + rep.when, rep)
    return rep

@pytest.fixture
def something(request):
    def fin():
        # request.node is an "item" because we use the default
        # "function" scope
        if request.node.rep_setup.failed:
            print "setting up a test failed!", request.node.nodeid
        elif request.node.rep_setup.passed:
            if request.node.rep_call.failed:
                print "executing test failed", request.node.nodeid
    request.addfinalizer(fin)
if you then have failing tests::
# content of test_module.py

import pytest

@pytest.fixture
def other():
    assert 0

def test_setup_fails(something, other):
    pass

def test_call_fails(something):
    assert 0

def test_fail2():
    assert 0
and run it::
$ py.test -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 3 items
test_module.py EFF
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@pytest.fixture
def other():
> assert 0
E assert 0
test_module.py:6: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________
something = None
def test_call_fails(something):
> assert 0
E assert 0
test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________
def test_fail2():
> assert 0
E assert 0
test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.01 seconds =====================
setting up a test failed! test_module.py::test_setup_fails
executing test failed test_module.py::test_call_fails
You'll see that the fixture finalizers could use the precise reporting
information.
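For instance, a finalizer might persist extra diagnostics only when the test call
itself failed.  Here is a hedged sketch building on the ``rep_setup``/``rep_call``
attributes set by the hook above (the fixture name ``logged_resource`` and the
``diagnostics.log`` file are made-up for illustration)::

    # content of conftest.py - hypothetical sketch
    import pytest

    @pytest.fixture
    def logged_resource(request):
        resource = {"events": []}
        def fin():
            # only act if setup passed but the actual test call failed
            if request.node.rep_setup.passed and request.node.rep_call.failed:
                with open("diagnostics.log", "a") as f:
                    f.write("%s: %r\n" % (request.node.nodeid, resource["events"]))
        request.addfinalizer(fin)
        return resource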

View File

@@ -0,0 +1,72 @@
A session-fixture which can look at all collected tests
----------------------------------------------------------------
A session-scoped fixture effectively has access to all
collected test items. Here is an example of a fixture
function which walks all collected tests, checks whether
their test class defines a ``callme`` method, and calls
it if so::
# content of conftest.py

import pytest

@pytest.fixture(scope="session", autouse=True)
def callattr_ahead_of_alltests(request):
    print "callattr_ahead_of_alltests called"
    seen = set([None])
    session = request.node
    for item in session.items:
        cls = item.getparent(pytest.Class)
        if cls not in seen:
            if hasattr(cls.obj, "callme"):
                cls.obj.callme()
            seen.add(cls)
Test classes may now define a ``callme`` method, which
will be called ahead of running any tests::
# content of test_module.py

class TestHello:
    @classmethod
    def callme(cls):
        print "callme called!"

    def test_method1(self):
        print "test_method1 called"

    def test_method2(self):
        print "test_method2 called"

class TestOther:
    @classmethod
    def callme(cls):
        print "callme other called"

    def test_other(self):
        print "test other"

# works with unittest as well ...
import unittest

class SomeTest(unittest.TestCase):
    @classmethod
    def callme(cls):
        print "SomeTest callme called"

    def test_unit1(self):
        print "test_unit1 method called"
If you run this without output capturing::
$ py.test -q -s test_module.py
....
callattr_ahead_of_alltests called
callme called!
callme other called
SomeTest callme called
test_method1 called
test_method2 called
test other
test_unit1 method called

View File

@@ -29,7 +29,7 @@ and does not handle Deferreds returned from a test in pytest style.
If you are using trial's unittest.TestCase chances are that you can
just run your tests even if you return Deferreds. In addition,
there also is a dedicated `pytest-twisted
<http://pypi.python.org/pypi/pytest-twisted`` plugin which allows to
<http://pypi.python.org/pypi/pytest-twisted>`_ plugin which allows to
return deferreds from pytest-style tests, allowing to use
:ref:`fixtures` and other features.

View File

@@ -71,7 +71,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
$ py.test test_smtpsimple.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
test_smtpsimple.py F
@@ -79,7 +79,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x1992a70>
smtp = <smtplib.SMTP instance at 0x226cc20>
def test_ehlo(smtp):
response, msg = smtp.ehlo()
@@ -89,7 +89,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
E assert 0
test_smtpsimple.py:12: AssertionError
========================= 1 failed in 0.30 seconds =========================
========================= 1 failed in 0.20 seconds =========================
In the failure traceback we see that the test function was called with a
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
@@ -168,7 +168,7 @@ function::
return smtplib.SMTP("merlinux.eu")
The name of the fixture again is ``smtp`` and you can access its result by
listing the name ``smtp`` as an input parameter in any test or setup
listing the name ``smtp`` as an input parameter in any test or fixture
function (in or below the directory where ``conftest.py`` is located)::
# content of test_module.py
@@ -189,7 +189,7 @@ inspect what is going on and can now run the tests::
$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_module.py FF
@@ -197,7 +197,7 @@ inspect what is going on and can now run the tests::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________
smtp = <smtplib.SMTP instance at 0x2b8a248>
smtp = <smtplib.SMTP instance at 0x18a6368>
def test_ehlo(smtp):
response = smtp.ehlo()
@@ -209,7 +209,7 @@ inspect what is going on and can now run the tests::
test_module.py:6: AssertionError
________________________________ test_noop _________________________________
smtp = <smtplib.SMTP instance at 0x2b8a248>
smtp = <smtplib.SMTP instance at 0x18a6368>
def test_noop(smtp):
response = smtp.noop()
@@ -218,7 +218,7 @@ inspect what is going on and can now run the tests::
E assert 0
test_module.py:11: AssertionError
========================= 2 failed in 0.48 seconds =========================
========================= 2 failed in 0.26 seconds =========================
You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp`` object was passed into the two
@@ -271,7 +271,7 @@ using it has executed::
$ py.test -s -q --tb=no
FF
finalizing <smtplib.SMTP instance at 0x1584908>
finalizing <smtplib.SMTP instance at 0x1e10248>
We see that the ``smtp`` instance is finalized after the two
tests using it executed. If we had specified ``scope='function'``
@@ -298,8 +298,6 @@ Running it::
> assert 0, smtp.helo()
E AssertionError: (250, 'mail.python.org')
.. _`request`: :py:class:`_pytest.python.FixtureRequest`
.. _`fixture-parametrize`:
Parametrizing a fixture
@@ -315,7 +313,7 @@ configured in multiple ways.
Extending the previous example, we can flag the fixture to create two
``smtp`` fixture instances which will cause all tests using the fixture
to run twice. The fixture function gets access to each parameter
through the special `request`_ object::
through the special :py:class:`request <FixtureRequest>` object::
# content of conftest.py
import pytest
@@ -342,7 +340,7 @@ So let's just do another run::
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x2368248>
smtp = <smtplib.SMTP instance at 0x1b38a28>
def test_ehlo(smtp):
response = smtp.ehlo()
@@ -354,7 +352,7 @@ So let's just do another run::
test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________
smtp = <smtplib.SMTP instance at 0x2368248>
smtp = <smtplib.SMTP instance at 0x1b38a28>
def test_noop(smtp):
response = smtp.noop()
@@ -365,18 +363,18 @@ So let's just do another run::
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x2377680>
smtp = <smtplib.SMTP instance at 0x1b496c8>
def test_ehlo(smtp):
response = smtp.ehlo()
assert response[0] == 250
> assert "merlinux" in response[1]
E assert 'merlinux' in 'mail.python.org\nSIZE 10240000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'
E assert 'merlinux' in 'mail.python.org\nSIZE 25600000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'
test_module.py:5: AssertionError
________________________ test_noop[mail.python.org] ________________________
smtp = <smtplib.SMTP instance at 0x2377680>
smtp = <smtplib.SMTP instance at 0x1b496c8>
def test_noop(smtp):
response = smtp.noop()
@@ -424,13 +422,13 @@ Here we declare an ``app`` fixture which receives the previously defined
$ py.test -v test_appsetup.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items
test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED
test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED
========================= 2 passed in 6.79 seconds =========================
========================= 2 passed in 5.38 seconds =========================
Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
@@ -489,7 +487,7 @@ Let's run the tests in verbose mode and with looking at the print-output::
$ py.test -v -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 8 items
test_module.py:16: test_0[1] PASSED

View File

@@ -23,7 +23,7 @@ Installation options::
To check your installation has installed the correct version::
$ py.test --version
This is py.test version 2.3.3, imported from /home/hpk/p/pytest/.tox/regen/lib/python2.7/site-packages/pytest.pyc
This is py.test version 2.3.5, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.py
If you get an error checkout :ref:`installation issues`.
@@ -45,7 +45,7 @@ That's it. You can execute the test function now::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
test_sample.py F
@@ -122,7 +122,7 @@ run the module by passing its filename::
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass instance at 0x22a4d40>
self = <test_class.TestClass instance at 0x315b488>
def test_two(self):
x = "hello"
@@ -157,7 +157,7 @@ before performing the test function call. Let's just run it::
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-594/test_needsfiles0')
tmpdir = local('/tmp/pytest-322/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
@@ -166,7 +166,7 @@ before performing the test function call. Let's just run it::
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-594/test_needsfiles0
/tmp/pytest-322/test_needsfiles0
Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.

View File

@@ -73,7 +73,7 @@ this to your ``setup.py`` file::
pass
def run(self):
import sys,subprocess
errno = subprocess.call([sys.executable, 'runtest.py'])
errno = subprocess.call([sys.executable, 'runtests.py'])
raise SystemExit(errno)
setup(
#...,
@@ -85,7 +85,7 @@ If you now type::
python setup.py test
this will execute your tests using ``runtest.py``. As this is a
this will execute your tests using ``runtests.py``. As this is a
standalone version of ``py.test`` no prior installation whatsoever is
required for calling the test command. You can also pass additional
arguments to the subprocess-calls such as your test directory or other

View File

@@ -4,6 +4,9 @@
pytest: helps you write better programs
=============================================
.. note:: Upcoming: `professional testing with pytest and tox <http://www.python-academy.com/courses/specialtopics/python_course_testing.html>`_ , 24th-26th June 2013, Leipzig.
**a mature full-featured Python testing tool**
- runs on Posix/Windows, Python 2.4-3.3, PyPy and Jython-2.5.1

View File

@@ -53,7 +53,7 @@ which will thus run three times::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 3 items
test_expectation.py ..F
@@ -135,8 +135,8 @@ Let's also run with a stringinput that will lead to a failing test::
def test_valid_string(stringinput):
> assert stringinput.isalpha()
E assert <built-in method isalpha of str object at 0x2b1792721fa8>()
E + where <built-in method isalpha of str object at 0x2b1792721fa8> = '!'.isalpha
E assert <built-in method isalpha of str object at 0x2ba729dab300>()
E + where <built-in method isalpha of str object at 0x2ba729dab300> = '!'.isalpha
test_strings.py:3: AssertionError
@@ -149,7 +149,7 @@ listlist::
$ py.test -q -rs test_strings.py
s
========================= short test summary info ==========================
SKIP [1] /home/hpk/p/pytest/.tox/regen/lib/python2.7/site-packages/_pytest/python.py:960: got empty parameter set, function test_valid_string at /tmp/doc-exec-26/test_strings.py:1
SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:974: got empty parameter set, function test_valid_string at /tmp/doc-exec-240/test_strings.py:1
For further examples, you might want to look at :ref:`more
parametrization examples <paramexamples>`.

View File

@@ -8,6 +8,7 @@ Here are some examples of projects using py.test (please send notes via :ref:`co
* `PyPy <http://pypy.org>`_, Python with a JIT compiler, running over
`16000 tests <http://buildbot.pypy.org/summary?branch=%3Ctrunk%3E>`_
* the `MoinMoin <http://moinmo.in>`_ Wiki Engine
* `sentry <https://getsentry.com/welcome/>`_, realtime app-maintenance and exception tracking
* `tox <http://codespeak.net/tox>`_, virtualenv/Hudson integration tool
* `PIDA <http://pida.co.uk>`_ framework for integrated development
* `PyPM <http://code.activestate.com/pypm/>`_ ActiveState's package manager
@@ -18,7 +19,7 @@ Here are some examples of projects using py.test (please send notes via :ref:`co
* `mwlib <http://pypi.python.org/pypi/mwlib>`_ mediawiki parser and utility library
* `The Translate Toolkit <http://translate.sourceforge.net/wiki/toolkit/index>`_ for localization and conversion
* `execnet <http://codespeak.net/execnet>`_ rapid multi-Python deployment
* `pylib <http://pylib.org>`_ cross-platform path, IO, dynamic code library
* `pylib <http://py.rtfd.org>`_ cross-platform path, IO, dynamic code library
* `Pacha <http://pacha.cafepais.com/>`_ configuration management in five minutes
* `bbfreeze <http://pypi.python.org/pypi/bbfreeze>`_ create standalone executables from Python scripts
* `pdb++ <http://bitbucket.org/antocuni/pdb>`_ a fancier version of PDB

View File

@@ -132,7 +132,7 @@ Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 6 items
xfail_demo.py xxxxxx

View File

@@ -4,6 +4,8 @@ Talks and Tutorials
.. _`funcargs`: funcargs.html
.. note:: Upcoming: `professional testing with pytest and tox <http://www.python-academy.com/courses/specialtopics/python_course_testing.html>`_ , 24th-26th June 2013, Leipzig.
Tutorial examples and blog postings
---------------------------------------------
@@ -12,6 +14,9 @@ Tutorial examples and blog postings
Basic usage and funcargs:
- `pytest introduction from Brian Okken (January 2013)
<http://pythontesting.net/framework/pytest-introduction/>`_
- `pycon australia 2012 pytest talk from Brianna Laugher
<http://2012.pycon-au.org/schedule/52/view_talk?day=sunday>`_ (`video <http://www.youtube.com/watch?v=DTNejE9EraI>`_, `slides <http://www.slideshare.net/pfctdayelise/funcargs-other-fun-with-pytest>`_, `code <https://gist.github.com/3386951>`_)
- `pycon 2012 US talk video from Holger Krekel <http://www.youtube.com/watch?v=9LVqBQcFmyw>`_
@@ -57,7 +62,7 @@ Plugin specific examples:
.. _`generating parametrized tests with funcargs`: funcargs.html#test-generators
.. _`test generators and cached setup`: http://bruynooghe.blogspot.com/2010/06/pytest-test-generators-and-cached-setup.html
Conference talks and tutorials
Older conference talks and tutorials
----------------------------------------
- `ep2009-rapidtesting.pdf`_ tutorial slides (July 2009):

View File

@@ -29,7 +29,7 @@ Running this would result in a passed test except for the last
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 1 items
test_tmpdir.py F
@@ -37,7 +37,7 @@ Running this would result in a passed test except for the last
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
tmpdir = local('/tmp/pytest-595/test_create_file0')
tmpdir = local('/tmp/pytest-323/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
@@ -48,7 +48,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.03 seconds =========================
========================= 1 failed in 0.02 seconds =========================
.. _`base temporary directory`:
@@ -68,6 +68,6 @@ When distributing tests on the local machine, ``py.test`` takes care to
configure a basetemp directory for the sub processes such that all temporary
data lands below a single per-test run basetemp directory.
.. _`py.path.local`: http://pylib.org/path.html
.. _`py.path.local`: http://py.rtfd.org/path.html

View File

@@ -88,7 +88,7 @@ the ``self.db`` values in the traceback::
$ py.test test_unittest_db.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.3.3
platform linux2 -- Python 2.7.3 -- pytest-2.3.5
collected 2 items
test_unittest_db.py FF
@@ -101,7 +101,7 @@ the ``self.db`` values in the traceback::
def test_method1(self):
assert hasattr(self, "db")
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x269e5a8>
E AssertionError: <conftest.DummyDB instance at 0x19fdf38>
test_unittest_db.py:9: AssertionError
___________________________ MyTest.test_method2 ____________________________
@@ -110,7 +110,7 @@ the ``self.db`` values in the traceback::
def test_method2(self):
> assert 0, self.db # fail for demo purposes
E AssertionError: <conftest.DummyDB instance at 0x269e5a8>
E AssertionError: <conftest.DummyDB instance at 0x19fdf38>
test_unittest_db.py:12: AssertionError
========================= 2 failed in 0.02 seconds =========================

View File

@@ -133,7 +133,7 @@ by the `PyPy-test`_ web page to show test results over several revisions.
.. _`PyPy-test`: http://buildbot.pypy.org/summary
Sending test report to pocoo pastebin service
Sending test report to online pastebin service
-----------------------------------------------------
**Creating a URL for each test failure**::
@@ -165,7 +165,7 @@ this acts as if you would call "py.test" from the command line.
It will not raise ``SystemExit`` but return the exitcode instead.
You can pass in options and arguments::
pytest.main(['x', 'mytestdir'])
pytest.main(['-x', 'mytestdir'])
or pass in a string::

View File

@@ -29,6 +29,8 @@ help:
@echo " devhelp to make HTML files and a Devhelp project"
@echo " epub to make an epub"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " texinfo to make Texinfo files"
@echo " info to make Texinfo files and run them through makeinfo"
@echo " latexpdf to make LaTeX files and run them through pdflatex"
@echo " text to make text files"
@echo " man to make manual pages"
@@ -140,3 +142,18 @@ doctest:
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
@echo "Testing of doctests in the sources finished, look at the " \
"results in $(BUILDDIR)/doctest/output.txt."
texinfo:
mkdir -p $(BUILDDIR)/texinfo
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo
@echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo."
@echo "Run \`make' in that directory to run these through makeinfo" \
"(use \`make info' here to do that automatically)."
info:
mkdir -p $(BUILDDIR)/texinfo
$(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo
@echo "Running Texinfo files through makeinfo..."
make -C $(BUILDDIR)/texinfo info
@echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo."

View File

@@ -261,6 +261,19 @@ epub_copyright = u'2011, holger krekel et alii'
#epub_tocdup = True
# -- Options for texinfo output ------------------------------------------------
texinfo_documents = [
(master_doc, 'pytest', 'pytest Documentation',
('Holger Krekel@*Benjamin Peterson@*Ronny Pfannschmidt@*'
'Floris Bruynooghe@*others'),
'pytest',
'simple powerful testing with Python',
'Programming',
1),
]
# Example configuration for intersphinx: refer to the Python standard library.
intersphinx_mapping = {} # 'http://docs.python.org/': None}
def setup(app):

View File

@@ -72,3 +72,12 @@ Python モジュール (通常 python テストモジュールを含む) の doc
mymodule.py .
========================= 1 passed in 0.02 seconds =========================
..
It is possible to use fixtures using the ``getfixture`` helper::
それは ``getfixture`` ヘルパーを使ってフィクスチャを使用することが可能である::
# content of example.rst
>>> tmp = getfixture('tmpdir')
>>> ...

View File

@@ -26,7 +26,7 @@ py.test を使っているプロジェクトを紹介します (:ref:`contact`
* `mwlib <http://pypi.python.org/pypi/mwlib>`_ mediawiki parser and utility library
* `The Translate Toolkit <http://translate.sourceforge.net/wiki/toolkit/index>`_ for localization and conversion
* `execnet <http://codespeak.net/execnet>`_ rapid multi-Python deployment
* `pylib <http://pylib.org>`_ cross-platform path, IO, dynamic code library
* `pylib <http://py.rtfd.org>`_ cross-platform path, IO, dynamic code library
* `Pacha <http://pacha.cafepais.com/>`_ configuration management in five minutes
* `bbfreeze <http://pypi.python.org/pypi/bbfreeze>`_ create standalone executables from Python scripts
* `pdb++ <http://bitbucket.org/antocuni/pdb>`_ a fancier version of PDB
@@ -59,7 +59,7 @@ py.test を使っているプロジェクトを紹介します (:ref:`contact`
* `mwlib <http://pypi.python.org/pypi/mwlib>`_: mediawiki のパーサーとユーティリティライブラリ
* `The Translate Toolkit <http://translate.sourceforge.net/wiki/toolkit/index>`_: ローカライズと変換
* `execnet <http://codespeak.net/execnet>`_: 高速な multi-Python デプロイ
* `pylib <http://pylib.org>`_: クロスプラットフォームのパス、IO、動的コードライブラリ
* `pylib <http://py.rtfd.org>`_: クロスプラットフォームのパス、IO、動的コードライブラリ
* `Pacha <http://pacha.cafepais.com/>`_: 5分でできる構成管理
* `bbfreeze <http://pypi.python.org/pypi/bbfreeze>`_: Python スクリプトから単独で実行できる実行可能ファイルの作成
* `pdb++ <http://bitbucket.org/antocuni/pdb>`_: PDB の手の込んだバージョン

View File

@@ -97,6 +97,6 @@
``py.test`` は、ローカルマシン上で分散テストを行うとき、全ての一時データが basetemp ディレクトリの配下で実行されてテスト毎に一意になるよう、サブプロセスに対しても basetemp ディレクトリをちゃんと設定します。
.. _`py.path.local`: http://pylib.org/path.html
.. _`py.path.local`: http://py.rtfd.org/path.html

View File

@@ -276,7 +276,7 @@ Python コードから直接 ``py.test`` を呼び出せます::
これはコマンドラインから "py.test" を呼び出すように動作します。 ``SystemExit`` を発生させない代わりに終了コードを返します。次のようにオプションと引数を渡します::
pytest.main(['x', 'mytestdir'])
pytest.main(['-x', 'mytestdir'])
..
or pass in a string::

View File

@@ -6,49 +6,13 @@ except ImportError:
use_setuptools()
from setuptools import setup, Command
long_description = """
The ``py.test`` testing tool makes it easy to write small tests, yet
scales to support complex functional testing. It provides
- `auto-discovery
<http://pytest.org/latest/goodpractises.html#python-test-discovery>`_
of test modules and functions,
- detailed info on failing `assert statements <http://pytest.org/latest/assert.html>`_ (no need to remember ``self.assert*`` names)
- `modular fixtures <http://pytest.org/latest/fixture.html>`_ for
managing small or parametrized long-lived test resources.
- multi-paradigm support: you can use ``py.test`` to run test suites based
on `unittest <http://pytest.org/latest/unittest.html>`_ (or trial),
`nose <http://pytest.org/latest/nose.html>`_
- single-source compatibility to Python2.4 all the way up to Python3.3,
PyPy and Jython.
- many `external plugins <http://pytest.org/latest/plugins.html#installing-external-plugins-searching>`_.
A simple example for a test::
# content of test_module.py
def test_function():
i = 4
assert i == 3
which can be run with ``py.test test_module.py``. See `getting-started <http://pytest.org/latest/getting-started.html#our-first-test-run>`_ for more examples.
For much more info, including PDF docs, see
http://pytest.org
and report bugs at:
http://bitbucket.org/hpk42/pytest/issues/
(c) Holger Krekel and others, 2004-2012
"""
long_description = open("README.rst").read()
def main():
setup(
name='pytest',
description='py.test: simple powerful testing with Python',
long_description = long_description,
version='2.3.3',
version='2.3.5',
url='http://pytest.org',
license='MIT license',
platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],
@@ -57,7 +21,7 @@ def main():
entry_points= make_entry_points(),
cmdclass = {'test': PyTest},
# the following should be enabled for release
install_requires=['py>=1.4.12'],
install_requires=['py>=1.4.13dev6'],
classifiers=['Development Status :: 6 - Mature',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',

View File

@@ -294,6 +294,21 @@ class TestGeneralUsage:
])
assert 'sessionstarttime' not in result.stderr.str()
@pytest.mark.parametrize('lookfor', ['test_fun.py', 'test_fun.py::test_a'])
def test_issue134_report_syntaxerror_when_collecting_member(self, testdir, lookfor):
testdir.makepyfile(test_fun="""
def test_a():
pass
def""")
result = testdir.runpytest(lookfor)
result.stdout.fnmatch_lines(['*SyntaxError*'])
if '::' in lookfor:
result.stderr.fnmatch_lines([
'*ERROR*',
])
assert result.ret == 4 # usage error only if item not found
class TestInvocationVariants:
def test_earlyinit(self, testdir):
p = testdir.makepyfile("""
@@ -490,6 +505,8 @@ class TestDurations:
source = """
import time
frag = 0.02
def test_something():
pass
def test_2():
time.sleep(frag*5)
def test_1():

View File

@@ -66,7 +66,8 @@ def pytest_generate_tests(metafunc):
metafunc.addcall(funcargs={name: val})
elif 'anypython' in metafunc.fixturenames:
for name in ('python2.4', 'python2.5', 'python2.6',
'python2.7', 'python3.1', 'pypy', 'jython'):
'python2.7', 'python3.2', "python3.3",
'pypy', 'jython'):
metafunc.addcall(id=name, param=name)
# XXX copied from execnet's conftest.py - needs to be merged

View File

@@ -35,7 +35,7 @@ class TestModule:
pytest.raises(ImportError, "modcol.obj")
class TestClass:
def test_class_with_init_not_collected(self, testdir):
def test_class_with_init_skip_collect(self, testdir):
modcol = testdir.getmodulecol("""
class TestClass1:
def __init__(self):
@@ -45,7 +45,10 @@ class TestClass:
pass
""")
l = modcol.collect()
assert len(l) == 0
assert len(l) == 2
for classcol in l:
pytest.raises(pytest.skip.Exception, classcol.collect)
def test_class_subclassobject(self, testdir):
testdir.getmodulecol("""
@@ -659,6 +662,28 @@ def test_customized_python_discovery(testdir):
"*2 passed*",
])
def test_customized_python_discovery_functions(testdir):
testdir.makeini("""
[pytest]
python_functions=_test
""")
p = testdir.makepyfile("""
def _test_underscore():
pass
""")
result = testdir.runpytest("--collectonly", "-s")
result.stdout.fnmatch_lines([
"*_test_underscore*",
])
result = testdir.runpytest()
assert result.ret == 0
result.stdout.fnmatch_lines([
"*1 passed*",
])
def test_collector_attributes(testdir):
testdir.makeconftest("""
import pytest

View File

@@ -948,7 +948,44 @@ class TestAutouseDiscovery:
reprec = testdir.inline_run()
reprec.assertoutcome(passed=3)
class TestAutouseManagement:
def test_autouse_conftest_mid_directory(self, testdir):
pkgdir = testdir.mkpydir("xyz123")
pkgdir.join("conftest.py").write(py.code.Source("""
import pytest
@pytest.fixture(autouse=True)
def app():
import sys
sys._myapp = "hello"
"""))
t = pkgdir.ensure("tests", "test_app.py")
t.write(py.code.Source("""
import sys
def test_app():
assert sys._myapp == "hello"
"""))
reprec = testdir.inline_run("-s")
reprec.assertoutcome(passed=1)
def test_autouse_honored_for_yield(self, testdir):
testdir.makepyfile("""
import pytest
@pytest.fixture(autouse=True)
def tst():
global x
x = 3
def test_gen():
def f(hello):
assert x == abs(hello)
yield f, 3
yield f, -3
""")
reprec = testdir.inline_run()
reprec.assertoutcome(passed=2)
def test_funcarg_and_setup(self, testdir):
testdir.makepyfile("""
import pytest
@@ -1105,7 +1142,7 @@ class TestAutouseManagement:
reprec = testdir.inline_run()
reprec.assertoutcome(passed=5)
def test_setup_funcarg_order(self, testdir):
def test_ordering_autouse_before_explicit(self, testdir):
testdir.makepyfile("""
import pytest
@@ -1122,6 +1159,30 @@ class TestAutouseManagement:
reprec = testdir.inline_run()
reprec.assertoutcome(passed=1)
@pytest.mark.issue226
@pytest.mark.parametrize("param1", ["", "params=[1]"], ids=["p00","p01"])
@pytest.mark.parametrize("param2", ["", "params=[1]"], ids=["p10","p11"])
def test_ordering_dependencies_torndown_first(self, testdir, param1, param2):
testdir.makepyfile("""
import pytest
l = []
@pytest.fixture(%(param1)s)
def arg1(request):
request.addfinalizer(lambda: l.append("fin1"))
l.append("new1")
@pytest.fixture(%(param2)s)
def arg2(request, arg1):
request.addfinalizer(lambda: l.append("fin2"))
l.append("new2")
def test_arg(arg2):
pass
def test_check():
assert l == ["new1", "new2", "fin2", "fin1"]
""" % locals())
reprec = testdir.inline_run("-s")
reprec.assertoutcome(passed=2)
class TestFixtureMarker:
def test_parametrize(self, testdir):
testdir.makepyfile("""
@@ -1580,6 +1641,20 @@ class TestFixtureMarker:
reprec = testdir.inline_run("-v")
reprec.assertoutcome(passed=6)
def test_fixture_marked_function_not_collected_as_test(self, testdir):
testdir.makepyfile("""
import pytest
@pytest.fixture
def test_app():
return 1
def test_something(test_app):
assert test_app == 1
""")
reprec = testdir.inline_run()
reprec.assertoutcome(passed=1)
class TestRequestScopeAccess:
pytestmark = pytest.mark.parametrize(("scope", "ok", "error"),[
["session", "", "fspath class function module"],

View File

@@ -117,3 +117,35 @@ class TestMockDecoration:
""")
reprec = testdir.inline_run()
reprec.assertoutcome(passed=2)
class TestReRunTests:
def test_rerun(self, testdir):
testdir.makeconftest("""
from _pytest.runner import runtestprotocol
def pytest_runtest_protocol(item, nextitem):
runtestprotocol(item, log=False, nextitem=nextitem)
runtestprotocol(item, log=True, nextitem=nextitem)
""")
testdir.makepyfile("""
import pytest
count = 0
req = None
@pytest.fixture
def fix(request):
global count, req
assert request != req
req = request
print ("fix count %s" % count)
count += 1
def test_fix(fix):
pass
""")
result = testdir.runpytest("-s")
result.stdout.fnmatch_lines("""
*fix count 0*
*fix count 1*
""")
result.stdout.fnmatch_lines("""
*2 passed*
""")

View File

@@ -1,3 +1,4 @@
import pytest, py, sys
from _pytest import python as funcargs
from _pytest.python import FixtureLookupError
@@ -106,6 +107,7 @@ class TestMetafunc:
assert metafunc._calls[2].id == "x1-a"
assert metafunc._calls[3].id == "x1-b"
@pytest.mark.issue250
def test_idmaker_autoname(self):
from _pytest.python import idmaker
result = idmaker(("a", "b"), [("string", 1.0),
@@ -115,6 +117,9 @@ class TestMetafunc:
result = idmaker(("a", "b"), [(object(), 1.0),
(object(), object())])
assert result == ["a0-1.0", "a1-b1"]
# unicode mixing, issue250
result = idmaker((py.builtin._totext("a"), "b"), [({}, '\xc3\xb4')])
assert result == ['a0-\xc3\xb4']
def test_addcall_and_parametrize(self):

View File

@@ -6,6 +6,18 @@ from _pytest.assertion import reinterpret, util
needsnewassert = pytest.mark.skipif("sys.version_info < (2,6)")
@pytest.fixture
def mock_config():
class Config(object):
verbose = False
def getoption(self, name):
if name == 'verbose':
return self.verbose
raise KeyError('Not mocked out: %s' % name)
return Config()
def interpret(expr):
return reinterpret.reinterpret(expr, py.code.Frame(sys._getframe(1)))
@@ -32,8 +44,11 @@ class TestBinReprIntegration:
"*test_check*PASS*",
])
def callequal(left, right):
return plugin.pytest_assertrepr_compare('==', left, right)
def callequal(left, right, verbose=False):
config = mock_config()
config.verbose = verbose
return plugin.pytest_assertrepr_compare(config, '==', left, right)
class TestAssert_reprcompare:
def test_different_types(self):
@@ -48,6 +63,17 @@ class TestAssert_reprcompare:
assert '- spam' in diff
assert '+ eggs' in diff
def test_text_skipping(self):
lines = callequal('a'*50 + 'spam', 'a'*50 + 'eggs')
assert 'Skipping' in lines[1]
for line in lines:
assert 'a'*50 not in line
def test_text_skipping_verbose(self):
lines = callequal('a'*50 + 'spam', 'a'*50 + 'eggs', verbose=True)
assert '- ' + 'a'*50 + 'spam' in lines
assert '+ ' + 'a'*50 + 'eggs' in lines
def test_multiline_text_diff(self):
left = 'foo\nspam\nbar'
right = 'foo\neggs\nbar'
@@ -73,6 +99,11 @@ class TestAssert_reprcompare:
expl = callequal(set([0, 1]), set([0, 2]))
assert len(expl) > 1
def test_frozenzet(self):
expl = callequal(frozenset([0, 1]), set([0, 2]))
print (expl)
assert len(expl) > 1
def test_list_tuples(self):
expl = callequal([], [(1,2)])
assert len(expl) > 1
@@ -103,6 +134,19 @@ class TestAssert_reprcompare:
expl = ' '.join(callequal('foo', 'bar'))
assert 'raised in repr()' not in expl
def test_python25_compile_issue257(testdir):
testdir.makepyfile("""
def test_rewritten():
assert 1 == 2
# some comment
""")
result = testdir.runpytest()
assert result.ret == 1
result.stdout.fnmatch_lines("""
*E*assert 1 == 2*
*1 failed*
""")
@needsnewassert
def test_rewritten(testdir):
testdir.makepyfile("""
@@ -111,8 +155,9 @@ def test_rewritten(testdir):
""")
assert testdir.runpytest().ret == 0
def test_reprcompare_notin():
detail = plugin.pytest_assertrepr_compare('not in', 'foo', 'aaafoobbb')[1:]
def test_reprcompare_notin(mock_config):
detail = plugin.pytest_assertrepr_compare(
mock_config, 'not in', 'foo', 'aaafoobbb')[1:]
assert detail == ["'foo' is contained here:", ' aaafoobbb', '? +++']
@needsnewassert
@@ -164,7 +209,7 @@ def test_assert_compare_truncate_longmessage(testdir):
result = testdir.runpytest()
result.stdout.fnmatch_lines([
"*too verbose, truncated*",
"*truncated*use*-vv*",
])
@@ -275,3 +320,17 @@ def test_warn_missing(testdir):
result.stderr.fnmatch_lines([
"*WARNING*assert statements are not executed*",
])
def test_recursion_source_decode(testdir):
testdir.makepyfile("""
def test_something():
pass
""")
testdir.makeini("""
[pytest]
python_files = *.py
""")
result = testdir.runpytest("--collectonly")
result.stdout.fnmatch_lines("""
<Module*>
""")

View File

@@ -107,7 +107,15 @@ class TestAssertionRewrite:
assert getmsg(f) == "assert False"
def f():
assert a_global
assert getmsg(f, {"a_global" : False}) == "assert a_global"
assert getmsg(f, {"a_global" : False}) == "assert False"
def f():
assert sys == 42
assert getmsg(f, {"sys" : sys}) == "assert sys == 42"
def f():
assert cls == 42
class X(object):
pass
assert getmsg(f, {"cls" : X}) == "assert cls == 42"
def test_assert_already_has_message(self):
def f():
@@ -232,7 +240,7 @@ class TestAssertionRewrite:
def test_attribute(self):
class X(object):
g = 3
ns = {"X" : X, "x" : X()}
ns = {"x" : X}
def f():
assert not x.g
assert getmsg(f, ns) == """assert not 3
@@ -386,3 +394,11 @@ def test_rewritten():
b = content.encode("utf-8")
testdir.tmpdir.join("test_newlines.py").write(b, "wb")
assert testdir.runpytest().ret == 0
@pytest.mark.skipif("sys.version_info[0] >= 3")
def test_assume_ascii(self, testdir):
content = "u'\xe2\x99\xa5'"
testdir.tmpdir.join("test_encoding.py").write(content, "wb")
res = testdir.runpytest()
assert res.ret != 0
assert "SyntaxError: Non-ASCII character" in res.stdout.str()

View File

@@ -437,6 +437,18 @@ class TestCaptureFixture:
])
assert result.ret == 2
@pytest.mark.xfail("sys.version_info < (2,7)")
@pytest.mark.issue14
def test_capture_and_logging(self, testdir):
p = testdir.makepyfile("""
import logging
def test_log(capsys):
logging.error('x')
""")
result = testdir.runpytest(p)
assert 'closed' not in result.stderr.str()
def test_setup_failure_does_not_kill_capturing(testdir):
sub1 = testdir.mkpydir("sub1")
sub1.join("conftest.py").write(py.code.Source("""

View File

@@ -315,3 +315,8 @@ def test_cmdline_processargs_simple(testdir):
"*-h*",
])
@pytest.mark.skipif("sys.platform == 'win32'")
def test_toolongargs_issue224(testdir):
result = testdir.runpytest("-m", "hello" * 500)
assert result.ret == 0

View File

@@ -612,6 +612,19 @@ class TestTracer:
assert names == ['hello', ' line1', ' line2',
' line3', ' line4', ' line5', 'last']
def test_readable_output_dictargs(self):
from _pytest.core import TagTracer
rootlogger = TagTracer()
out = rootlogger.format_message(['test'], [1])
assert out == ['1 [test]\n']
out2= rootlogger.format_message(['test'], ['test', {'a':1}])
assert out2 ==[
'test [test]\n',
' a: 1\n'
]
def test_setprocessor(self):
from _pytest.core import TagTracer
rootlogger = TagTracer()

View File

@@ -59,6 +59,26 @@ class TestDoctests:
"*UNEXPECTED*ZeroDivision*",
])
def test_doctest_linedata_missing(self, testdir):
testdir.tmpdir.join('hello.py').write(py.code.Source("""
class Fun(object):
@property
def test(self):
'''
>>> a = 1
>>> 1/0
'''
"""))
result = testdir.runpytest("--doctest-modules")
result.stdout.fnmatch_lines([
"*hello*",
"*EXAMPLE LOCATION UNKNOWN, not showing all tests of that example*",
"*1/0*",
"*UNEXPECTED*ZeroDivision*",
"*1 failed*",
])
def test_doctest_unex_importerror(self, testdir):
testdir.tmpdir.join("hello.py").write(py.code.Source("""
import asdalsdkjaslkdjasd
@@ -124,3 +144,23 @@ class TestDoctests:
" 1",
"*test_txtfile_failing.txt:2: DocTestFailure"
])
def test_txtfile_with_fixtures(self, testdir):
p = testdir.maketxtfile("""
>>> dir = getfixture('tmpdir')
>>> type(dir).__name__
'LocalPath'
""")
reprec = testdir.inline_run(p, )
reprec.assertoutcome(passed=1)
def test_doctestmodule_with_fixtures(self, testdir):
p = testdir.makepyfile("""
'''
>>> dir = getfixture('tmpdir')
>>> type(dir).__name__
'LocalPath'
'''
""")
reprec = testdir.inline_run(p, "--doctest-modules")
reprec.assertoutcome(passed=1)

View File

@@ -3,7 +3,8 @@ import subprocess
def pytest_funcarg__standalone(request):
return request.cached_setup(scope="module", setup=lambda: Standalone(request))
return request.cached_setup(scope="module",
setup=lambda: Standalone(request))
class Standalone:
def __init__(self, request):

View File

@@ -76,5 +76,6 @@ def test_PYTEST_DEBUG(testdir, monkeypatch):
result = testdir.runpytest()
assert result.ret == 0
result.stderr.fnmatch_lines([
"*registered*PluginManager*"
"*pytest_plugin_registered*",
"*manager*PluginManager*"
])

View File

@@ -282,12 +282,53 @@ class TestPython:
if not sys.platform.startswith("java"):
assert "hx" in fnode.toxml()
def test_pass_captures_stdout(self, testdir):
testdir.makepyfile("""
def test_pass():
print('hello-stdout')
""")
result, dom = runandparse(testdir)
node = dom.getElementsByTagName("testsuite")[0]
pnode = node.getElementsByTagName("testcase")[0]
systemout = pnode.getElementsByTagName("system-out")[0]
assert "hello-stdout" in systemout.toxml()
def test_pass_captures_stderr(self, testdir):
testdir.makepyfile("""
import sys
def test_pass():
sys.stderr.write('hello-stderr')
""")
result, dom = runandparse(testdir)
node = dom.getElementsByTagName("testsuite")[0]
pnode = node.getElementsByTagName("testcase")[0]
systemout = pnode.getElementsByTagName("system-err")[0]
assert "hello-stderr" in systemout.toxml()
def test_mangle_testnames():
from _pytest.junitxml import mangle_testnames
names = ["a/pything.py", "Class", "()", "method"]
newnames = mangle_testnames(names)
assert newnames == ["a.pything", "Class", "method"]
def test_dont_configure_on_slaves(tmpdir):
gotten = []
class FakeConfig:
def __init__(self):
self.pluginmanager = self
self.option = self
junitprefix = None
#XXX: shouldnt need tmpdir ?
xmlpath = str(tmpdir.join('junix.xml'))
register = gotten.append
fake_config = FakeConfig()
from _pytest import junitxml
junitxml.pytest_configure(fake_config)
assert len(gotten) == 1
FakeConfig.slaveinput = None
junitxml.pytest_configure(fake_config)
assert len(gotten) == 1
class TestNonPython:
def test_summing_simple(self, testdir):

View File

@@ -137,6 +137,48 @@ def test_mark_option(spec, testdir):
assert len(passed) == len(passed_result)
assert list(passed) == list(passed_result)
@pytest.mark.multi(spec=[
("interface", ("test_interface",)),
("not interface", ("test_nointer",)),
])
def test_mark_option_custom(spec, testdir):
testdir.makeconftest("""
import pytest
def pytest_collection_modifyitems(items):
for item in items:
if "interface" in item.nodeid:
item.keywords["interface"] = pytest.mark.interface
""")
testdir.makepyfile("""
def test_interface():
pass
def test_nointer():
pass
""")
opt, passed_result = spec
rec = testdir.inline_run("-m", opt)
passed, skipped, fail = rec.listoutcomes()
passed = [x.nodeid.split("::")[-1] for x in passed]
assert len(passed) == len(passed_result)
assert list(passed) == list(passed_result)
@pytest.mark.multi(spec=[
("interface", ("test_interface",)),
("not interface", ("test_nointer",)),
])
def test_keyword_option_custom(spec, testdir):
testdir.makepyfile("""
def test_interface():
pass
def test_nointer():
pass
""")
opt, passed_result = spec
rec = testdir.inline_run("-k", opt)
passed, skipped, fail = rec.listoutcomes()
passed = [x.nodeid.split("::")[-1] for x in passed]
assert len(passed) == len(passed_result)
assert list(passed) == list(passed_result)
class TestFunctional:
@@ -342,11 +384,11 @@ class TestKeywordSelection:
for keyword in ['test_one', 'est_on']:
#yield check, keyword, 'test_one'
check(keyword, 'test_one')
check('TestClass.test', 'test_method_one')
check('TestClass and test', 'test_method_one')
@pytest.mark.parametrize("keyword", [
'xxx', 'xxx test_2', 'TestClass', 'xxx -test_1',
'TestClass test_2', 'xxx TestClass test_2'])
'xxx', 'xxx and test_2', 'TestClass', 'xxx and -test_1',
'TestClass and test_2', 'xxx and TestClass and test_2'])
def test_select_extra_keywords(self, testdir, keyword):
p = testdir.makepyfile(test_select="""
def test_1():
@@ -386,7 +428,6 @@ class TestKeywordSelection:
item = dlist[0].items[0]
assert item.name == "test_one"
def test_keyword_extra(self, testdir):
p = testdir.makepyfile("""
def test_one():

View File

@@ -280,3 +280,53 @@ def test_nose_setup_ordering(testdir):
result.stdout.fnmatch_lines([
"*1 passed*",
])
def test_apiwrapper_problem_issue260(testdir):
# this would end up trying a call a optional teardown on the class
# for plain unittests we dont want nose behaviour
testdir.makepyfile("""
import unittest
class TestCase(unittest.TestCase):
def setup(self):
#should not be called in unittest testcases
assert 0, 'setup'
def teardown(self):
#should not be called in unittest testcases
assert 0, 'teardown'
def setUp(self):
print('setup')
def tearDown(self):
print('teardown')
def test_fun(self):
pass
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines("*1 passed*")
@pytest.mark.skipif("sys.version_info < (2,6)")
def test_setup_teardown_linking_issue265(testdir):
# we accidentally didnt integrate nose setupstate with normal setupstate
# this test ensures that won't happen again
testdir.makepyfile('''
import pytest
class TestGeneric(object):
def test_nothing(self):
"""Tests the API of the implementation (for generic and specialized)."""
@pytest.mark.skipif("True", reason=
"Skip tests to check if teardown is skipped as well.")
class TestSkipTeardown(TestGeneric):
def setup(self):
"""Sets up my specialized implementation for $COOL_PLATFORM."""
raise Exception("should not call setup for skipped tests")
def teardown(self):
"""Undoes the setup."""
raise Exception("should not call teardown for skipped tests")
''')
result = testdir.runpytest()
result.stdout.fnmatch_lines("*1 skipped*")

View File

@@ -106,6 +106,22 @@ class TestPDB:
if child.isalive():
child.wait()
def test_pdb_and_capsys(self, testdir):
p1 = testdir.makepyfile("""
import pytest
def test_1(capsys):
print ("hello1")
pytest.set_trace()
""")
child = testdir.spawn_pytest(str(p1))
child.expect("test_1")
child.send("capsys.readouterr()\n")
child.expect("hello1")
child.sendeof()
rest = child.read()
if child.isalive():
child.wait()
def test_pdb_interaction_doctest(self, testdir):
p1 = testdir.makepyfile("""
import pytest

View File

@@ -378,6 +378,10 @@ def test_runtest_in_module_ordering(testdir):
])
def test_outcomeexception_exceptionattributes():
outcome = runner.OutcomeException('test')
assert outcome.args[0] == outcome.msg
def test_pytest_exit():
try:
py.test.exit("hello")

View File

@@ -39,6 +39,20 @@ class TestEvaluator:
expl = ev.getexplanation()
assert expl == "condition: hasattr(os, 'sep')"
@pytest.mark.skipif('sys.version_info[0] >= 3')
def test_marked_one_arg_unicode(self, testdir):
item = testdir.getitem("""
import pytest
@pytest.mark.xyz(u"hasattr(os, 'sep')")
def test_func():
pass
""")
ev = MarkEvaluator(item, 'xyz')
assert ev
assert ev.istrue()
expl = ev.getexplanation()
assert expl == "condition: hasattr(os, 'sep')"
def test_marked_one_arg_with_reason(self, testdir):
item = testdir.getitem("""
import pytest

View File

@@ -1,7 +1,7 @@
import py, pytest
import os
from _pytest.tmpdir import pytest_funcarg__tmpdir, TempdirHandler
from _pytest.tmpdir import tmpdir, TempdirHandler
def test_funcarg(testdir):
testdir.makepyfile("""
@@ -16,12 +16,13 @@ def test_funcarg(testdir):
# pytest_unconfigure has deleted the TempdirHandler already
config = item.config
config._tmpdirhandler = TempdirHandler(config)
p = pytest_funcarg__tmpdir(item)
item._initrequest()
p = tmpdir(item._request)
assert p.check()
bn = p.basename.strip("0123456789")
assert bn.endswith("test_func_a_")
item.name = "qwe/\\abc"
p = pytest_funcarg__tmpdir(item)
p = tmpdir(item._request)
assert p.check()
bn = p.basename.strip("0123456789")
assert bn == "qwe__abc"

tox.ini
View File

@@ -1,10 +1,10 @@
[tox]
distshare={homedir}/.tox/distshare
envlist=py24,py26,py27,py27-nobyte,py31,py32,py33,py27-xdist,py25,trial
envlist=py25,py26,py27,py27-nobyte,py32,py33,py27-xdist,trial
indexserver=
pypi = http://pypi.python.org/simple
pypi = https://pypi.python.org/simple
testrun = http://pypi.testrun.org
#default = http://pypi.testrun.org
default = http://pypi.testrun.org
[testenv]
changedir=testing
@@ -39,7 +39,6 @@ commands=
[testenv:trial]
changedir=.
basepython=python2.6
deps=:pypi:twisted
:pypi:pexpect
commands=
@@ -50,6 +49,13 @@ changedir=.
commands=py.test --doctest-modules _pytest
deps=
[testenv:py32]
deps=
:pypi:nose
[testenv:py33]
deps=
:pypi:nose
[testenv:doc]
basepython=python
@@ -68,6 +74,7 @@ deps=:pypi:sphinx
:pypi:PyYAML
commands=
rm -rf /tmp/doc-exec*
#pip install pytest==2.3.4
make regen
[testenv:py31]
@@ -79,13 +86,6 @@ commands=
py.test -n3 -rfsxX \
--junitxml={envlogdir}/junit-{envname}.xml []
[testenv:py32]
deps=py>=1.4.0
[testenv:py33]
deps=py>=1.4.0
:pypi:nose
[testenv:jython]
changedir=testing
commands=