Compare commits
303 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
| | a5ce481022 | | |
| | dca5fa2241 | | |
| | 586befb945 | | |
| | b0b6695538 | | |
| | 024df6e00b | | |
| | 5e28f461c8 | | |
| | 64544bee1a | | |
| | 7c8755cc89 | | |
| | 7d747a1cde | | |
| | dbaedbacde | | |
| | cf17f1d628 | | |
| | 67de2c53ac | | |
| | 26ab80c4cd | | |
| | 20849a44f5 | | |
| | 51644a116c | | |
| | 98513b995a | | |
| | dc4e205876 | | |
| | 2855a2f6cb | | |
| | cc2337af3a | | |
| | ab4183d400 | | |
| | 37965657d0 | | |
| | ccaa1af534 | | |
| | 2f3bbdafda | | |
| | 021c087701 | | |
| | 4541456a96 | | |
| | f5d796b093 | | |
| | 6eec2f5893 | | |
| | 0594265adc | | |
| | fb3af07ef4 | | |
| | 39b8a19cf7 | | |
| | 916c1c170e | | |
| | df643f65f0 | | |
| | d630d02c5b | | |
| | 30b10a6950 | | |
| | cda84fb566 | | |
| | d3893dd5d1 | | |
| | 55a8bfd174 | | |
| | f588eae4f5 | | |
| | d8c365ef2c | | |
| | 4cbb2ab3b3 | | |
| | d1a3f5c3a6 | | |
| | bb07ba7807 | | |
| | 8282efbb40 | | |
| | 9251e747af | | |
| | 439cc1238f | | |
| | 3049af618c | | |
| | 7bc7a9b702 | | |
| | 5173647b4d | | |
| | 57a832812b | | |
| | bee7543716 | | |
| | b9767fd74c | | |
| | dbe66f468a | | |
| | 35cbb5791d | | |
| | a18fd61a20 | | |
| | a1c3d60747 | | |
| | fe4ccdff0e | | |
| | cd1ead4f7b | | |
| | 9568ff3b23 | | |
| | 6e5f491a42 | | |
| | 7768972ec5 | | |
| | 754fab9b55 | | |
| | 253a87b2dc | | |
| | 81082ed3d3 | | |
| | 465cfff6f9 | | |
| | 738f14a48a | | |
| | 22dc47d9f9 | | |
| | 6cb3281ddd | | |
| | a5e7e441d3 | | |
| | a7c6688bd6 | | |
| | d9c24552fc | | |
| | 631d311e89 | | |
| | c2480f5c54 | | |
| | a94bb0a8bb | | |
| | 646c2c6001 | | |
| | f6b555f5ad | | |
| | bf5b226474 | | |
| | 084c617b67 | | |
| | bfaf8e50b6 | | |
| | 848c749d1a | | |
| | 41ad7dbae1 | | |
| | 93eac240a0 | | |
| | a6060dfb6d | | |
| | 7f36649763 | | |
| | f07ebc6615 | | |
| | e876ad9abd | | |
| | 503addbf09 | | |
| | 1318df4f5b | | |
| | 45693c2847 | | |
| | 0e8cd9297a | | |
| | 0cca20bef9 | | |
| | 1446b4b4e6 | | |
| | aa84359bd9 | | |
| | f275830ca7 | | |
| | 627e068516 | | |
| | f472f21406 | | |
| | f4963270c6 | | |
| | 08c3b1b80f | | |
| | 935761f098 | | |
| | dd268c1b2b | | |
| | 172505f703 | | |
| | 6746a00cb8 | | |
| | 46dc7eeacb | | |
| | ae241a5071 | | |
| | 5fd84c35dd | | |
| | 535d892f27 | | |
| | cb2eb9ba33 | | |
| | 4f94ab4e42 | | |
| | 449b55cc70 | | |
| | 9dc79fd187 | | |
| | b57fb9fd47 | | |
| | d68c65b493 | | |
| | fa61927c6b | | |
| | d4a487c725 | | |
| | 76584b53a1 | | |
| | 6b0f0adf5b | | |
| | 396045e53f | | |
| | 80db25822c | | |
| | f358fe7154 | | |
| | e14459d45c | | |
| | 4e4b507472 | | |
| | c7ee6e71ab | | |
| | 4766497515 | | |
| | 38b18c44e9 | | |
| | a73c27da13 | | |
| | dbaf7ee9d0 | | |
| | 7a90bed19b | | |
| | 8adac2878f | | |
| | 66ed2d123a | | |
| | b902c36bfc | | |
| | 099ac1e1f4 | | |
| | 1aca6c9d7c | | |
| | fe24e01a03 | | |
| | 838e758cf7 | | |
| | ddd4467fdd | | |
| | 5574e45749 | | |
| | 74e55493d1 | | |
| | ecec653e98 | | |
| | 0ba0f91720 | | |
| | ea49993459 | | |
| | b4b86159cd | | |
| | 91b6f2bda8 | | |
| | 227d847216 | | |
| | 6e0c30d67d | | |
| | 65cbf591d8 | | |
| | e79a312b92 | | |
| | 42d44bfd43 | | |
| | ccc04b9fc4 | | |
| | 18306a4644 | | |
| | 1bbe1d086c | | |
| | 672919a8e2 | | |
| | f176ee3a1c | | |
| | 474b177da8 | | |
| | b2e87ce027 | | |
| | 2e163e4aae | | |
| | 63eacd9dd5 | | |
| | b008e489ba | | |
| | d5078001c9 | | |
| | 8b3ac3b03a | | |
| | 6af20a5290 | | |
| | eb1b1005ae | | |
| | 6fd57ec786 | | |
| | 03a814a859 | | |
| | b4c2161e35 | | |
| | 9198069739 | | |
| | 4d77653bb0 | | |
| | 3f17784386 | | |
| | 971f96468c | | |
| | c11202b549 | | |
| | 42d63832b7 | | |
| | f5f3fe54d5 | | |
| | 76ec623b22 | | |
| | 69fc6987ad | | |
| | 0790f7a75f | | |
| | db8fbe7661 | | |
| | 91c41cd6b3 | | |
| | 1bf1cfd07a | | |
| | 51d94a4a6e | | |
| | e18abfd013 | | |
| | 6c7ea8191f | | |
| | 329dca42a7 | | |
| | 0362aaba5a | | |
| | 948dea8bb4 | | |
| | 6155e9139d | | |
| | 6dd8405aed | | |
| | c076f4e789 | | |
| | d32a132b51 | | |
| | 0e3779b14f | | |
| | fe1c35f8d0 | | |
| | b4588f1798 | | |
| | 64c7c1be15 | | |
| | 1c817aa7bd | | |
| | d02eaa8881 | | |
| | b92176024c | | |
| | 1c746e0819 | | |
| | 166aae4418 | | |
| | 58933aac2a | | |
| | 45aa4e5229 | | |
| | e643e99586 | | |
| | 9f6d6f630d | | |
| | 812ba87f37 | | |
| | 2b0887fa5f | | |
| | ee8d2f9950 | | |
| | 51d29cf4c6 | | |
| | e378496b24 | | |
| | 4d21274a29 | | |
| | 705442cf4e | | |
| | 87b4cb283f | | |
| | 83505b790d | | |
| | 2ca6d9f039 | | |
| | 87b8769680 | | |
| | 78e7d7aed0 | | |
| | 68b353be0d | | |
| | a756dc8106 | | |
| | 604e27658c | | |
| | dfa273dc25 | | |
| | 5263656df6 | | |
| | d88fe07377 | | |
| | 2e23057804 | | |
| | 303f49a5ad | | |
| | adbbd164ff | | |
| | 93424b0f9c | | |
| | fb7706d4c7 | | |
| | 4131923c0f | | |
| | 7b95af2400 | | |
| | eb6481c663 | | |
| | c126cac98d | | |
| | e3a8b1e062 | | |
| | fa6d5bd15b | | |
| | f2c8a837af | | |
| | ccc1b21ebd | | |
| | 85f2a78005 | | |
| | e21202b730 | | |
| | dc0535f7d5 | | |
| | f2791988f9 | | |
| | 8e83af1c33 | | |
| | 268c051eba | | |
| | 03cb37b1eb | | |
| | d5c3265763 | | |
| | 13e0340350 | | |
| | 5093d8b925 | | |
| | 40187ec9bb | | |
| | f5f8695587 | | |
| | 27f5213718 | | |
| | b83a3bcc80 | | |
| | 3a3f69372f | | |
| | 4a08ee2b74 | | |
| | 82ba764bb6 | | |
| | 94e31e414a | | |
| | a94a6b4282 | | |
| | af0edf0d10 | | |
| | 8307270cec | | |
| | c4fe622b82 | | |
| | b28977fbaf | | |
| | 96cb1208d3 | | |
| | 0c8e71faa5 | | |
| | d965101f6a | | |
| | d15ee2fb87 | | |
| | 826d1e6153 | | |
| | 50c9e3f654 | | |
| | 59b8ea1746 | | |
| | cf02fb60c1 | | |
| | 679d72eedf | | |
| | 03b23e2587 | | |
| | 48e6823c7a | | |
| | 6b4e6eee09 | | |
| | f7648e11d8 | | |
| | 7bb7d1205c | | |
| | a1d41c6811 | | |
| | 58e0301f87 | | |
| | a5e7b2760d | | |
| | efe438d3e8 | | |
| | ec0565fac5 | | |
| | 48a6a504b6 | | |
| | 8f55425898 | | |
| | a51e52aee3 | | |
| | 69dfc75572 | | |
| | 9d3e51af9f | | |
| | f7c1b9087a | | |
| | 36c42b5c15 | | |
| | bc8ee95e72 | | |
| | 979dfd20f2 | | |
| | 67fbd24ebf | | |
| | 7f7589afa9 | | |
| | 4f01cda2a7 | | |
| | bd296c796f | | |
| | 7144cec580 | | |
| | 99a1188287 | | |
| | 0b18b6094e | | |
| | ae53d04780 | | |
| | a324826dfd | | |
| | 29bf205f3a | | |
| | 3b9fd3abd8 | | |
| | 974e4e3a9d | | |
| | 369b7709f7 | | |
| | 78438db752 | | |
| | a2f4a11301 | | |
| | 077c468589 | | |
| | d4fe273b2f | | |
| | 761a95e542 | | |
| | 5ae04397bd | | |
| | 2c230f910d | | |
| | ae54151467 | | |
| | 05af53d160 | | |
@@ -15,7 +15,7 @@ syntax:glob
*.orig
*~

doc/_build
doc/*/_build
build/
dist/
*.egg-info

@@ -23,3 +23,5 @@ issue/
env/
3rdparty/
.tox
.cache
.coverage

.hgtags (6)
@@ -43,3 +43,9 @@ c777dcad166548b7499564cb49ae5c8b4b07f935 2.0.3
363e5a5a59c803e6bc176a6f9cc4bf1a1ca2dab0 2.0.3
e5e1746a197f0398356a43fbe2eebac9690f795d 2.1.0
5864412c6f3c903384243bd315639d101d7ebc67 2.1.2
12a05d59249f80276e25fd8b96e8e545b1332b7a 2.1.3
1522710369337d96bf9568569d5f0ca9b38a74e0 2.2.0
3da8cec6c5326ed27c144c9b6d7a64a648370005 2.2.1
92b916483c1e65a80dc80e3f7816b39e84b36a4d 2.2.2
3c11c5c9776f3c678719161e96cc0a08169c1cb8 2.2.3
ad9fe504a371ad8eb613052d58f229aa66f53527 2.2.4

AUTHORS (1)
@@ -22,3 +22,4 @@ Jan Balster
Grig Gheorghiu
Bob Ippolito
Christian Tismer
Daniel Nuri

CHANGELOG (174)
@@ -1,3 +1,177 @@

Changes between 2.2.4 and 2.3.0
-----------------------------------

- fix issue202 - better automatic names for parametrized test functions
- fix issue139 - introduce @pytest.fixture which allows direct scoping
  and parametrization of funcarg factories. Introduce new @pytest.setup
  marker to allow the writing of setup functions which accept funcargs.
- fix issue198 - conftest fixtures were not found on windows32 in some
  circumstances with nested directory structures due to path manipulation issues
- fix issue193: skip test functions that were parametrized with empty
  parameter sets
- fix python3.3 compat, mostly reporting bits that previously depended
  on dict ordering
- introduce re-ordering of tests by resource and parametrization setup
  which takes precedence over the usual file-ordering
- fix issue185: monkeypatching time.time does not cause pytest to fail
- fix issue172: duplicate call of pytest.setup-decorated setup_module
  functions
- fix junitxml=path construction so that if tests change the
  current working directory and the path is a relative path
  it is constructed correctly from the original current working dir.
- fix "python setup.py test" example to cause a proper "errno" return
- fix issue165 - fix broken doc links and mention stackoverflow for FAQ
- catch unicode-issues when writing failure representations
  to terminal to prevent the whole session from crashing
- fix xfail/skip confusion: a skip-mark or an imperative pytest.skip
  will now take precedence over xfail-markers because we
  can't determine xfail/xpass status in case of a skip. see also:
  http://stackoverflow.com/questions/11105828/in-py-test-when-i-explicitly-skip-a-test-that-is-marked-as-xfail-how-can-i-get

- always report installed 3rd party plugins in the header of a test run

- fix issue160: a failing setup of an xfail-marked test should
  be reported as xfail (not xpass)

- fix issue128: show captured output when capsys/capfd are used

- fix issue179: properly show the dependency chain of factories

- pluginmanager.register(...) now raises ValueError if the
  plugin has already been registered or the name is taken

- fix issue159: improve http://pytest.org/latest/faq.html
  especially with respect to the "magic" history, also mention
  pytest-django, trial and unittest integration.

- make request.keywords and node.keywords writable. All descendant
  collection nodes will see keyword values. Keywords are dictionaries
  containing markers and other info.

- fix issue 178: xml binary escapes are now wrapped in py.xml.raw

- fix issue 176: correctly catch the builtin AssertionError
  even when we replaced AssertionError with a subclass on the
  python level

- factory discovery no longer fails with magic global callables
  that provide no sane __code__ object (mock.call for example)

- fix issue 182: testdir.inprocess_run now considers passed plugins

- fix issue 188: ensure sys.exc_info is clear on python2
  before calling into a test

- fix issue 191: add unittest TestCase runTest method support
- fix issue 156: monkeypatch correctly handles class level descriptors

- reporting refinements:

  - pytest_report_header now receives a "startdir" so that
    you can use startdir.bestrelpath(yourpath) to show
    a nice relative path

  - allow plugins to implement both pytest_report_header and
    pytest_sessionstart (sessionstart is invoked first).

  - don't show a deselected-reason line if there is none

  - py.test -vv will show all assert comparisons instead of truncating
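As a quick illustration of the @pytest.fixture decorator mentioned above, a
minimal sketch; the FakeConnection class is a made-up stand-in for a real
resource and is not part of pytest::

    import pytest

    class FakeConnection:
        # hypothetical resource object used only for this example
        def __init__(self, backend):
            self.backend = backend
        def close(self):
            pass

    @pytest.fixture(scope="module", params=["sqlite", "postgres"])
    def db(request):
        # one FakeConnection per param value, shared across the module
        conn = FakeConnection(request.param)
        request.addfinalizer(conn.close)
        return conn

    def test_backend_known(db):
        assert db.backend in ("sqlite", "postgres")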
Changes between 2.2.3 and 2.2.4
-----------------------------------

- fix error message for rewritten assertions involving the % operator
- fix issue 126: correctly match all invalid xml characters for junitxml
  binary escape
- fix issue with unittest: now @unittest.expectedFailure markers should
  be processed correctly (you can also use @pytest.mark markers)
- document integration with the extended distribute/setuptools test commands
- fix issue 140: properly get the real functions
  of bound classmethods for setup/teardown_class
- fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
- fix issue #143: call unconfigure/sessionfinish always when
  configure/sessionstart were called
- fix issue #144: better mangle test ids to junitxml classnames
- upgrade distribute_setup.py to 0.6.27

Changes between 2.2.2 and 2.2.3
----------------------------------------

- fix uploaded package to only include necessary files
Changes between 2.2.1 and 2.2.2
----------------------------------------

- fix issue101: wrong args to unittest.TestCase test function now
  produce better output
- fix issue102: report more useful errors and hints for when a
  test directory was renamed and some pyc/__pycache__ remain
- fix issue106: allow parametrize to be applied multiple times
  e.g. from module, class and at function level.
- fix issue107: actually perform session scope finalization
- don't check in parametrize if indirect parameters are funcarg names
- add chdir method to monkeypatch funcarg (see the sketch below)
- fix crash resulting from calling monkeypatch undo a second time
- fix issue115: make --collectonly robust against early failure
  (missing files/directories)
- "-qq --collectonly" now shows only files and the number of tests in them
- "-q --collectonly" now shows test ids
- allow adding of attributes to test reports such that it also works
  with distributed testing (no upgrade of pytest-xdist needed)
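The chdir helper added above combines naturally with the tmpdir funcarg; a
small sketch (not taken from the pytest docs)::

    import os

    def test_runs_in_tmpdir(tmpdir, monkeypatch):
        monkeypatch.chdir(tmpdir)   # original cwd is restored during teardown
        assert os.path.realpath(os.getcwd()) == os.path.realpath(str(tmpdir))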
Changes between 2.2.0 and 2.2.1
----------------------------------------

- fix issue99 (in pytest and py): internal errors with resultlog now
  produce better output - fixed by normalizing pytest_internalerror
  input arguments.
- fix issue97 / traceback issues (in pytest and py) improve traceback output
  in conjunction with jinja2 and cython which hack tracebacks
- fix issue93 (in pytest and pytest-xdist) avoid "delayed teardowns":
  the final test in a test node will now run its teardown directly
  instead of waiting for the end of the session. Thanks Dave Hunt for
  the good reporting and feedback. The pytest_runtest_protocol as well
  as the pytest_runtest_teardown hooks now have "nextitem" available
  which will be None indicating the end of the test run.
- fix collection crash due to unknown-source collected items, thanks
  to Ralf Schmitt (fixed by depending on a more recent pylib)
Changes between 2.1.3 and 2.2.0
----------------------------------------

- fix issue90: introduce eager tearing down of test items so that
  teardown functions are called earlier.
- add an all-powerful metafunc.parametrize function which allows to
  parametrize test function arguments in multiple steps and therefore
  from independent plugins and places.
- add a @pytest.mark.parametrize helper which allows to easily
  call a test function with different argument values (see the sketch
  after this list)
- Add examples to the "parametrize" example page, including a quick port
  of Test scenarios and the new parametrize function and decorator.
- introduce registration for "pytest.mark.*" helpers via ini-files
  or through plugin hooks. Also introduce a "--strict" option which
  will treat unregistered markers as errors,
  allowing to avoid typos and maintain a well described set of markers
  for your test suite. See examples at http://pytest.org/latest/mark.html
  and its links.
- issue50: introduce "-m marker" option to select tests based on markers
  (this is a stricter and more predictable version of '-k' in that "-m"
  only matches complete markers and has more obvious rules for and/or
  semantics).
- new feature to help optimizing the speed of your tests:
  --durations=N option for displaying N slowest test calls
  and setup/teardown methods.
- fix issue87: --pastebin now works with python3
- fix issue89: --pdb with unexpected exceptions in doctest work more sensibly
- fix and cleanup pytest's own test suite to not leak FDs
- fix issue83: link to generated funcarg list
- fix issue74: pyarg module names are now checked against imp.find_module false positives
- fix compatibility with twisted/trial-11.1.0 use cases
- simplify Node.listchain
- simplify junitxml output code by relying on py.xml
- add support for skip properties on unittest classes and functions
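A minimal sketch of the parametrize decorator described above (the expressions
are arbitrary examples, not taken from the docs)::

    import pytest

    @pytest.mark.parametrize(("expr", "expected"), [
        ("3+5", 8),
        ("2*4", 8),
    ])
    def test_eval(expr, expected):
        assert eval(expr) == expected

Combined with marker registration, a run such as "py.test -m webtest" would
then select only tests carrying a project-defined "webtest" marker.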
Changes between 2.1.2 and 2.1.3
----------------------------------------

ISSUES.txt (269)
@@ -1,3 +1,125 @@

improve / add to dependency/test resource injection
-------------------------------------------------------------
tags: wish feature docs

write up better examples showing the connection between
the two.

refine parametrize API
-------------------------------------------------------------
tags: critical feature

extend metafunc.parametrize to directly support indirection, example:

    def setupdb(request, config):
        # setup "resource" based on test request and the values passed
        # in to parametrize. setupfunc is called for each such value.
        # you may use request.addfinalizer() or request.cached_setup ...
        return dynamic_setup_database(val)

    @pytest.mark.parametrize("db", ["pg", "mysql"], setupfunc=setupdb)
    def test_heavy_functional_test(db):
        ...

There would be no need to write or explain funcarg factories and
their special __ syntax.

The examples and improvements should also show how to put the parametrize
decorator to a class, to a module or even to a directory. For the directory
part a conftest.py content like this::

    pytestmark = [
        @pytest.mark.parametrize_setup("db", ...),
    ]

probably makes sense in order to keep the declarative nature. This mirrors
the marker-mechanism with respect to a test module but puts it to a directory
scale.

When doing larger scoped parametrization it probably becomes necessary
to allow parametrization to be ignored if the according parameter is not
used (currently any parametrized argument that is not present in a function will cause a ValueError). Example:

    @pytest.mark.parametrize("db", ..., mustmatch=False)

means to not raise an error but simply ignore the parametrization
if the signature of a decorated function does not match. XXX is it
not sufficient to always allow non-matches?
unify item/request classes, generalize items
---------------------------------------------------------------
tags: 2.4 wish

in lieu of extended parametrization and the new way to specify resource
factories in terms of the parametrize decorator, consider unification
of the item and request class. This also is connected with allowing
funcargs in setup functions. Example of new item API:

    item.getresource("db")  # alias for request.getfuncargvalue
    item.addfinalizer(...)
    item.cached_setup(...)
    item.applymarker(...)

test classes/modules could then use this api via::

    def pytest_runtest_setup(item):
        use item API ...

introduction of this new method needs to be _fully_ backward compatible -
and the documentation needs to change along to mention this new way of
doing things.

impl note: probably Request._fillfixtures would be called from the
python plugin's own pytest_runtest_setup(item) and would call
item.getresource(X) for all X in the funcargs of a function.

XXX is it possible to even put the above item API to Nodes, i.e. also
to Directory/module/file/class collectors? Problem is that current
funcarg factories presume they are called with a per-function (even
per-funcarg-per-function) scope. Could there be small tweaks to the new
API that lift this restriction?

consider::

    def setup_class(cls, tmpdir):
        # would get a per-class tmpdir because tmpdir parametrization
        # would know that it is called with a class scope

this looks very difficult because those setup functions are also used
by nose etc. Rather consider introduction of a new setup hook:

    def setup_test(self, item):
        self.db = item.cached_setup(..., scope='class')
        self.tmpdir = item.getresource("tmpdir")

this should be compatible to unittest/nose and provide much of what
"testresources" provide. XXX This would not allow full parametrization
such that test function could be run multiple times with different
values. See "parametrized attributes" issue.
allow parametrized attributes on classes
--------------------------------------------------
tags: wish 2.4

example:

    @pytest.mark.parametrize_attr("db", setupfunc, [1,2,3], scope="class")
    @pytest.mark.parametrize_attr("tmp", setupfunc, scope="...")
    class TestMe:
        def test_hello(self):
            access self.db ...

this would run the test_hello() function three times with three
different values for self.db. This could also work with unittest/nose
style tests, i.e. it leverages existing test suites without needing
to rewrite them. Together with the previously mentioned setup_test()
maybe the setupfunc could be omitted?
checks / deprecations for next release
---------------------------------------------------------------
tags: bug 2.4 core xdist

@@ -25,21 +147,9 @@ appropriately to avoid this issue. Moreover/Alternatively, we could
record which implementations of a hook succeeded and only call their
teardown.

do early-teardown of test modules
-----------------------------------------
tags: feature 2.2

currently teardowns are called when the next test is set up
except for the function/method level where internally
"teardown_exact" tears down immediately. Generalize
this to perform the "necessary" teardown compared to
the "next" test item during teardown - this should
get rid of some irritations because otherwise e.g.
prints of teardown-code appear in the setup of the next test.
consider and document __init__ file usage in test directories
---------------------------------------------------------------
tags: bug 2.2 core
tags: bug core

Currently, a test module is imported with its fully qualified
package path, determined by checking __init__ files upwards.

@@ -54,7 +164,7 @@ certain scenarios makes sense.

relax requirement to have tests/testing contain an __init__
----------------------------------------------------------------
tags: feature 2.2
tags: feature
bb: http://bitbucket.org/hpk42/py-trunk/issue/64

A local test run of a "tests" directory may work

@@ -65,7 +175,7 @@ i.e. port the nose-logic of unloading a test module.

customize test function collection
-------------------------------------------------------
tags: feature 2.2
tags: feature

- introduce py.test.mark.nocollect for not considering a function for
  test collection at all. maybe also introduce a py.test.mark.test to

@@ -74,7 +184,7 @@ tags: feature 2.2

introduce pytest.mark.importorskip
-------------------------------------------------------
tags: feature 2.2
tags: feature

in addition to the imperative pytest.importorskip also introduce
a pytest.mark.importorskip so that the test count is more correct.
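The imperative form referred to above already exists; a typical use, with an
arbitrary module name and version, looks roughly like this::

    import pytest
    docutils = pytest.importorskip("docutils", minversion="0.3")

    def test_uses_docutils():
        assert docutils is not None

The proposed mark variant would express the same thing declaratively so that
the collected test count stays accurate.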
@@ -82,7 +192,7 @@ a pytest.mark.importorskip so that the test count is more correct.

introduce py.test.mark.platform
-------------------------------------------------------
tags: feature 2.2
tags: feature

Introduce nice-to-spell platform-skipping, examples:

@@ -99,44 +209,36 @@ interpreter versions.

pytest.mark.xfail signature change
-------------------------------------------------------
tags: feature 2.2
tags: feature

change to pytest.mark.xfail(reason, (optional)condition)
to better implement the word meaning. It also signals
better that we always have some kind of an implementation
reason that can be formulated.
Compatibility? Maybe rename to "pytest.mark.xfail"?

introduce py.test.mark registration
-----------------------------------------
tags: feature 2.2

introduce a hook that allows to register a named mark decorator
with documentation and add "py.test --marks" to get
a list of available marks. Deprecate "dynamic" mark
definitions.
Compatibility? how to introduce a new name/keep compat?

allow to non-intrusively apply skips/xfail/marks
---------------------------------------------------
tags: feature 2.2
tags: feature

use case: mark a module or directory structures
to be skipped on certain platforms (i.e. no import
attempt will be made).

consider introducing a hook/mechanism that allows to apply marks
from conftests or plugins.
from conftests or plugins. (See extended parametrization)

explicit referencing of conftest.py files
-----------------------------------------
tags: feature 2.2
tags: feature

allow to name conftest.py files (in sub directories) that should
be imported early, as to include command line options.

improve central py.test ini file
----------------------------------
tags: feature 2.2
tags: feature

introduce more declarative configuration options:
- (to-be-collected test directories)

@@ -147,49 +249,24 @@ introduce more declarative configuration options:

new documentation
----------------------------------
tags: feature 2.2
tags: feature

- logo py.test
- examples for unittest or functional testing
- resource management for functional testing
- patterns: page object
- parametrized testing
- better / more integrated plugin docs

generalize parametrized testing to generate combinations
-------------------------------------------------------------
tags: feature 2.2

think about extending metafunc.addcall or add a new method to allow to
generate tests with combinations of all generated versions - what to do
about "id" and "param" in such combinations though?
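A combination behaviour can already be approximated today by stacking the
parametrize decorator, where each application multiplies the generated tests;
a small sketch (not tied to any concrete issue)::

    import pytest

    @pytest.mark.parametrize("x", [0, 1])
    @pytest.mark.parametrize("y", ["a", "b"])
    def test_combo(x, y):
        # collected four times: (0,'a'), (0,'b'), (1,'a'), (1,'b')
        assert isinstance(x, int) and isinstance(y, str)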
introduce py.test.mark.multi
-----------------------------------------
tags: feature 1.3

introduce py.test.mark.multi to specify a number
of values for a given function argument.

have imported module mismatch honour relative paths
--------------------------------------------------------
tags: bug 2.2
tags: bug

With 1.1.1 py.test fails at least on windows if an import
is relative and compared against an absolute conftest.py
path. Normalize.

call termination with small timeout
-------------------------------------------------
tags: feature 2.2
test: testing/pytest/dist/test_dsession.py - test_terminate_on_hanging_node

Call gateway group termination with a small timeout if available.
Should make dist-testing less likely to leave lost processes.

consider globals: py.test.ensuretemp and config
--------------------------------------------------------------
tags: experimental-wish 2.2
tags: experimental-wish

consider deprecating py.test.ensuretemp and py.test.config
to further reduce py.test globality. Also consider

@@ -198,7 +275,7 @@ a plugin rather than being there from the start.

consider allowing funcargs for setup methods
--------------------------------------------------------------
tags: experimental-wish 2.2
tags: experimental-wish

Users have expressed the wish to have funcargs available to setup
functions. Experiment with allowing funcargs there - it might

@@ -208,13 +285,20 @@ factories with a request object that not have a cls/function
attributes. However, how to handle parametrized test functions
and funcargs?

    setup_function -> request can be like it is now
    setup_class    -> request has no request.function
    setup_module   -> request has no request.cls

maybe introduce a setup method like:

    setup_invocation(self, request)

which has full access to the test invocation through "request"
through which you can get funcargvalues, use cached_setup etc.
Therefore, the access to funcargs would be indirect but it
could be consistently implemented. setup_invocation() would
be a "glue" function for bringing together the xUnit and funcargs
world.

consider pytest_addsyspath hook
-----------------------------------------
tags: 2.2
tags:

py.test could call a new pytest_addsyspath() in order to systematically
allow manipulation of sys.path and to inhibit it via --no-addsyspath

@@ -226,7 +310,7 @@ and pytest_configure.

show plugin information in test header
----------------------------------------------------------------
tags: feature 2.2
tags: feature

Now that external plugins are becoming more numerous
it would be useful to have external plugins along with

@@ -234,7 +318,7 @@ their versions displayed as a header line.

deprecate global py.test.config usage
----------------------------------------------------------------
tags: feature 2.2
tags: feature

py.test.ensuretemp and py.test.config are probably the last
objects containing global state. Often using them is not

@@ -244,9 +328,58 @@ as others.

remove deprecated bits in collect.py
-------------------------------------------------------------------
tags: feature 2.2
tags: feature

In an effort to further simplify code, review and remove deprecated bits
in collect.py. Probably good:
- inline consider_file/dir methods, no need to have them
  subclass-overridable because of hooks

implement fslayout decorator
---------------------------------
tags: feature

Improve the way tests can work with pre-made examples,
keeping the layout close to the test function:

    @pytest.mark.fslayout("""
        conftest.py:
            # empty
        tests/
            test_%(NAME)s:  # becomes test_run1.py
                def test_function(self):
                    pass
    """)
    def test_run(pytester, fslayout):
        p = fslayout.findone("test_*.py")
        result = pytester.runpytest(p)
        assert result.ret == 0
        assert result.passed == 1

Another idea is to allow to define a full scenario including the run
in one content string::

    runscenario("""
        test_{TESTNAME}.py:
            import pytest
            @pytest.mark.xfail
            def test_that_fails():
                assert 0

            @pytest.mark.skipif("True")
            def test_hello():
                pass

        conftest.py:
            import pytest
            def pytest_runsetup_setup(item):
                pytest.skip("abc")

        runpytest -rsxX
            *SKIP*{TESTNAME}*
            *1 skipped*
    """)

This could be run with at least three different ways to invoke pytest:
through the shell, through "python -m pytest" and inlined. As inlined
would be the fastest it could be run first (or "--fast" mode).
@@ -1,2 +1,2 @@
#
__version__ = '2.1.3'
__version__ = '2.3.0'
@@ -50,7 +50,7 @@ def pytest_configure(config):
|
||||
hook = None
|
||||
if mode == "rewrite":
|
||||
hook = rewrite.AssertionRewritingHook()
|
||||
sys.meta_path.append(hook)
|
||||
sys.meta_path.insert(0, hook)
|
||||
warn_about_missing_assertion(mode)
|
||||
config._assertstate = AssertionState(config, mode)
|
||||
config._assertstate.hook = hook
|
||||
@@ -73,8 +73,12 @@ def pytest_runtest_setup(item):
|
||||
def callbinrepr(op, left, right):
|
||||
hook_result = item.ihook.pytest_assertrepr_compare(
|
||||
config=item.config, op=op, left=left, right=right)
|
||||
|
||||
for new_expl in hook_result:
|
||||
if new_expl:
|
||||
# Don't include pageloads of data unless we are very verbose (-vv)
|
||||
if len(''.join(new_expl[1:])) > 80*8 and item.config.option.verbose < 2:
|
||||
new_expl[1:] = ['Detailed information too verbose, truncated']
|
||||
res = '\n~'.join(new_expl)
|
||||
if item.config.getvalue("assertmode") == "rewrite":
|
||||
# The result will be fed back a python % formatting
|
||||
|
||||
@@ -1,8 +1,7 @@
|
||||
import py
|
||||
import sys, inspect
|
||||
from compiler import parse, ast, pycodegen
|
||||
from _pytest.assertion.util import format_explanation
|
||||
from _pytest.assertion.reinterpret import BuiltinAssertionError
|
||||
from _pytest.assertion.util import format_explanation, BuiltinAssertionError
|
||||
|
||||
passthroughex = py.builtin._sysex
|
||||
|
||||
@@ -527,10 +526,13 @@ if __name__ == '__main__':
|
||||
# example:
|
||||
def f():
|
||||
return 5
|
||||
|
||||
def g():
|
||||
return 3
|
||||
|
||||
def h(x):
|
||||
return 'never'
|
||||
|
||||
check("f() * g() == 5")
|
||||
check("not f()")
|
||||
check("not (f() and g() or 0)")
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
import sys
|
||||
import py
|
||||
|
||||
BuiltinAssertionError = py.builtin.builtins.AssertionError
|
||||
from _pytest.assertion.util import BuiltinAssertionError
|
||||
|
||||
class AssertionError(BuiltinAssertionError):
|
||||
def __init__(self, *args):
|
||||
@@ -45,4 +44,3 @@ if sys.version_info >= (2, 6) or (sys.platform.startswith("java")):
|
||||
from _pytest.assertion.newinterpret import interpret as reinterpret
|
||||
else:
|
||||
reinterpret = reinterpret_old
|
||||
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
"""Rewrite assertion AST to produce nice error messages"""
|
||||
|
||||
import ast
|
||||
import collections
|
||||
import errno
|
||||
import itertools
|
||||
import imp
|
||||
@@ -35,13 +34,13 @@ else:
|
||||
PYTEST_TAG = "%s-%s%s-PYTEST" % (impl, ver[0], ver[1])
|
||||
del ver, impl
|
||||
|
||||
PYC_EXT = ".py" + "c" if __debug__ else "o"
|
||||
PYC_EXT = ".py" + ("c" if __debug__ else "o")
|
||||
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT
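The parentheses added to PYC_EXT above matter because Python's conditional
expression binds more loosely than "+"; a standalone illustration of the
difference (not part of the diff itself)::

    debug = False
    # old form groups as ('.py' + 'c') if debug else 'o'  ->  'o'
    old = ".py" + "c" if debug else "o"
    # new form keeps the '.py' prefix in both branches     ->  '.pyo'
    new = ".py" + ("c" if debug else "o")
    assert old == "o" and new == ".pyo"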
|
||||
|
||||
REWRITE_NEWLINES = sys.version_info[:2] != (2, 7) and sys.version_info < (3, 2)
|
||||
|
||||
class AssertionRewritingHook(object):
|
||||
"""Import hook which rewrites asserts."""
|
||||
"""PEP302 Import hook which rewrites asserts."""
|
||||
|
||||
def __init__(self):
|
||||
self.session = None
|
||||
@@ -96,7 +95,8 @@ class AssertionRewritingHook(object):
|
||||
finally:
|
||||
self.session = sess
|
||||
else:
|
||||
state.trace("matched test file (was specified on cmdline): %r" % (fn,))
|
||||
state.trace("matched test file (was specified on cmdline): %r" %
|
||||
(fn,))
|
||||
# The requested module looks like a test file, so rewrite it. This is
|
||||
# the most magical part of the process: load the source, rewrite the
|
||||
# asserts, and load the rewritten source. We also cache the rewritten
|
||||
@@ -122,14 +122,14 @@ class AssertionRewritingHook(object):
|
||||
# because we're in a zip file.
|
||||
write = False
|
||||
elif e == errno.EACCES:
|
||||
state.trace("read only directory: %r" % (fn_pypath.dirname,))
|
||||
state.trace("read only directory: %r" % fn_pypath.dirname)
|
||||
write = False
|
||||
else:
|
||||
raise
|
||||
cache_name = fn_pypath.basename[:-3] + PYC_TAIL
|
||||
pyc = os.path.join(cache_dir, cache_name)
|
||||
# Notice that even if we're in a read-only directory, I'm going to check
|
||||
# for a cached pyc. This may not be optimal...
|
||||
# Notice that even if we're in a read-only directory, I'm going
|
||||
# to check for a cached pyc. This may not be optimal...
|
||||
co = _read_pyc(fn_pypath, pyc)
|
||||
if co is None:
|
||||
state.trace("rewriting %r" % (fn,))
|
||||
@@ -161,10 +161,11 @@ class AssertionRewritingHook(object):
|
||||
return sys.modules[name]
|
||||
|
||||
def _write_pyc(co, source_path, pyc):
|
||||
# Technically, we don't have to have the same pyc format as (C)Python, since
|
||||
# these "pycs" should never be seen by builtin import. However, there's
|
||||
# little reason deviate, and I hope sometime to be able to use
|
||||
# imp.load_compiled to load them. (See the comment in load_module above.)
|
||||
# Technically, we don't have to have the same pyc format as
|
||||
# (C)Python, since these "pycs" should never be seen by builtin
|
||||
# import. However, there's little reason deviate, and I hope
|
||||
# sometime to be able to use imp.load_compiled to load them. (See
|
||||
# the comment in load_module above.)
|
||||
mtime = int(source_path.mtime())
|
||||
try:
|
||||
fp = open(pyc, "wb")
|
||||
@@ -241,9 +242,8 @@ def _read_pyc(source, pyc):
|
||||
except EnvironmentError:
|
||||
return None
|
||||
# Check for invalid or out of date pyc file.
|
||||
if (len(data) != 8 or
|
||||
data[:4] != imp.get_magic() or
|
||||
struct.unpack("<l", data[4:])[0] != mtime):
|
||||
if (len(data) != 8 or data[:4] != imp.get_magic() or
|
||||
struct.unpack("<l", data[4:])[0] != mtime):
|
||||
return None
|
||||
co = marshal.load(fp)
|
||||
if not isinstance(co, types.CodeType):
|
||||
@@ -281,35 +281,35 @@ def _call_reprcompare(ops, results, expls, each_obj):
|
||||
|
||||
|
||||
unary_map = {
|
||||
ast.Not : "not %s",
|
||||
ast.Invert : "~%s",
|
||||
ast.USub : "-%s",
|
||||
ast.UAdd : "+%s"
|
||||
ast.Not: "not %s",
|
||||
ast.Invert: "~%s",
|
||||
ast.USub: "-%s",
|
||||
ast.UAdd: "+%s"
|
||||
}
|
||||
|
||||
binop_map = {
|
||||
ast.BitOr : "|",
|
||||
ast.BitXor : "^",
|
||||
ast.BitAnd : "&",
|
||||
ast.LShift : "<<",
|
||||
ast.RShift : ">>",
|
||||
ast.Add : "+",
|
||||
ast.Sub : "-",
|
||||
ast.Mult : "*",
|
||||
ast.Div : "/",
|
||||
ast.FloorDiv : "//",
|
||||
ast.Mod : "%",
|
||||
ast.Eq : "==",
|
||||
ast.NotEq : "!=",
|
||||
ast.Lt : "<",
|
||||
ast.LtE : "<=",
|
||||
ast.Gt : ">",
|
||||
ast.GtE : ">=",
|
||||
ast.Pow : "**",
|
||||
ast.Is : "is",
|
||||
ast.IsNot : "is not",
|
||||
ast.In : "in",
|
||||
ast.NotIn : "not in"
|
||||
ast.BitOr: "|",
|
||||
ast.BitXor: "^",
|
||||
ast.BitAnd: "&",
|
||||
ast.LShift: "<<",
|
||||
ast.RShift: ">>",
|
||||
ast.Add: "+",
|
||||
ast.Sub: "-",
|
||||
ast.Mult: "*",
|
||||
ast.Div: "/",
|
||||
ast.FloorDiv: "//",
|
||||
ast.Mod: "%%", # escaped for string formatting
|
||||
ast.Eq: "==",
|
||||
ast.NotEq: "!=",
|
||||
ast.Lt: "<",
|
||||
ast.LtE: "<=",
|
||||
ast.Gt: ">",
|
||||
ast.GtE: ">=",
|
||||
ast.Pow: "**",
|
||||
ast.Is: "is",
|
||||
ast.IsNot: "is not",
|
||||
ast.In: "in",
|
||||
ast.NotIn: "not in"
|
||||
}
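The "escaped for string formatting" comment refers to the operator symbol
ending up inside an explanation template that is itself later passed through
"%"-formatting; a standalone illustration of why a bare "%" would break there::

    symbol = "%%"   # what the table stores for ast.Mod
    template = "left " + symbol + " right (%(hint)s)"
    assert template % {"hint": "modulo"} == "left % right (modulo)"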
|
||||
|
||||
|
||||
@@ -342,7 +342,7 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
lineno = 0
|
||||
for item in mod.body:
|
||||
if (expect_docstring and isinstance(item, ast.Expr) and
|
||||
isinstance(item.value, ast.Str)):
|
||||
isinstance(item.value, ast.Str)):
|
||||
doc = item.value.s
|
||||
if "PYTEST_DONT_REWRITE" in doc:
|
||||
# The module has disabled assertion rewriting.
|
||||
@@ -463,7 +463,8 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
body.append(raise_)
|
||||
# Clear temporary variables by setting them to None.
|
||||
if self.variables:
|
||||
variables = [ast.Name(name, ast.Store()) for name in self.variables]
|
||||
variables = [ast.Name(name, ast.Store())
|
||||
for name in self.variables]
|
||||
clear = ast.Assign(variables, ast.Name("None", ast.Load()))
|
||||
self.statements.append(clear)
|
||||
# Fix line numbers.
|
||||
@@ -549,7 +550,8 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
new_kwarg, expl = self.visit(call.kwargs)
|
||||
arg_expls.append("**" + expl)
|
||||
expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
|
||||
new_call = ast.Call(new_func, new_args, new_kwargs, new_star, new_kwarg)
|
||||
new_call = ast.Call(new_func, new_args, new_kwargs,
|
||||
new_star, new_kwarg)
|
||||
res = self.assign(new_call)
|
||||
res_expl = self.explanation_param(self.display(res))
|
||||
outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl)
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
|
||||
import py
|
||||
|
||||
BuiltinAssertionError = py.builtin.builtins.AssertionError
|
||||
|
||||
# The _reprcompare attribute on the util module is used by the new assertion
|
||||
# interpretation code and assertion rewriter to detect this plugin was
|
||||
@@ -115,16 +116,11 @@ def assertrepr_compare(op, left, right):
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
explanation = ['(pytest_assertion plugin: representation of '
|
||||
'details failed. Probably an object has a faulty __repr__.)',
|
||||
str(excinfo)
|
||||
]
|
||||
|
||||
str(excinfo)]
|
||||
|
||||
if not explanation:
|
||||
return None
|
||||
|
||||
# Don't include pageloads of data, should be configurable
|
||||
if len(''.join(explanation)) > 80*8:
|
||||
explanation = ['Detailed information too verbose, truncated']
|
||||
|
||||
return [summary] + explanation
|
||||
|
||||
|
||||
@@ -11,20 +11,23 @@ def pytest_addoption(parser):
|
||||
group._addoption('-s', action="store_const", const="no", dest="capture",
|
||||
help="shortcut for --capture=no.")
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_cmdline_parse(pluginmanager, args):
|
||||
# we want to perform capturing already for plugin/conftest loading
|
||||
if '-s' in args or "--capture=no" in args:
|
||||
method = "no"
|
||||
elif hasattr(os, 'dup') and '--capture=sys' not in args:
|
||||
method = "fd"
|
||||
else:
|
||||
method = "sys"
|
||||
capman = CaptureManager(method)
|
||||
pluginmanager.register(capman, "capturemanager")
|
||||
|
||||
def addouterr(rep, outerr):
|
||||
for secname, content in zip(["out", "err"], outerr):
|
||||
if content:
|
||||
rep.sections.append(("Captured std%s" % secname, content))
|
||||
|
||||
def pytest_unconfigure(config):
|
||||
# registered in config.py during early conftest.py loading
|
||||
capman = config.pluginmanager.getplugin('capturemanager')
|
||||
while capman._method2capture:
|
||||
name, cap = capman._method2capture.popitem()
|
||||
# XXX logging module may wants to close it itself on process exit
|
||||
# otherwise we could do finalization here and call "reset()".
|
||||
cap.suspend()
|
||||
|
||||
class NoCapture:
|
||||
def startall(self):
|
||||
pass
|
||||
@@ -36,8 +39,9 @@ class NoCapture:
|
||||
return "", ""
|
||||
|
||||
class CaptureManager:
|
||||
def __init__(self):
|
||||
def __init__(self, defaultmethod=None):
|
||||
self._method2capture = {}
|
||||
self._defaultmethod = defaultmethod
|
||||
|
||||
def _maketempfile(self):
|
||||
f = py.std.tempfile.TemporaryFile()
|
||||
@@ -62,14 +66,6 @@ class CaptureManager:
|
||||
else:
|
||||
raise ValueError("unknown capturing method: %r" % method)
|
||||
|
||||
def _getmethod_preoptionparse(self, args):
|
||||
if '-s' in args or "--capture=no" in args:
|
||||
return "no"
|
||||
elif hasattr(os, 'dup') and '--capture=sys' not in args:
|
||||
return "fd"
|
||||
else:
|
||||
return "sys"
|
||||
|
||||
def _getmethod(self, config, fspath):
|
||||
if config.option.capture:
|
||||
method = config.option.capture
|
||||
@@ -82,16 +78,22 @@ class CaptureManager:
|
||||
method = "sys"
|
||||
return method
|
||||
|
||||
def reset_capturings(self):
|
||||
for name, cap in self._method2capture.items():
|
||||
cap.reset()
|
||||
|
||||
def resumecapture_item(self, item):
|
||||
method = self._getmethod(item.config, item.fspath)
|
||||
if not hasattr(item, 'outerr'):
|
||||
item.outerr = ('', '') # we accumulate outerr on the item
|
||||
return self.resumecapture(method)
|
||||
|
||||
def resumecapture(self, method):
|
||||
def resumecapture(self, method=None):
|
||||
if hasattr(self, '_capturing'):
|
||||
raise ValueError("cannot resume, already capturing with %r" %
|
||||
(self._capturing,))
|
||||
if method is None:
|
||||
method = self._defaultmethod
|
||||
cap = self._method2capture.get(method)
|
||||
self._capturing = method
|
||||
if cap is None:
|
||||
@@ -117,22 +119,20 @@ class CaptureManager:
|
||||
return "", ""
|
||||
|
||||
def activate_funcargs(self, pyfuncitem):
|
||||
if not hasattr(pyfuncitem, 'funcargs'):
|
||||
return
|
||||
assert not hasattr(self, '_capturing_funcargs')
|
||||
self._capturing_funcargs = capturing_funcargs = []
|
||||
for name, capfuncarg in pyfuncitem.funcargs.items():
|
||||
if name in ('capsys', 'capfd'):
|
||||
capturing_funcargs.append(capfuncarg)
|
||||
capfuncarg._start()
|
||||
funcargs = getattr(pyfuncitem, "funcargs", None)
|
||||
if funcargs is not None:
|
||||
for name, capfuncarg in funcargs.items():
|
||||
if name in ('capsys', 'capfd'):
|
||||
assert not hasattr(self, '_capturing_funcarg')
|
||||
self._capturing_funcarg = capfuncarg
|
||||
capfuncarg._start()
|
||||
|
||||
def deactivate_funcargs(self):
|
||||
capturing_funcargs = getattr(self, '_capturing_funcargs', None)
|
||||
if capturing_funcargs is not None:
|
||||
while capturing_funcargs:
|
||||
capfuncarg = capturing_funcargs.pop()
|
||||
capfuncarg._finalize()
|
||||
del self._capturing_funcargs
|
||||
capturing_funcarg = getattr(self, '_capturing_funcarg', None)
|
||||
if capturing_funcarg:
|
||||
outerr = capturing_funcarg._finalize()
|
||||
del self._capturing_funcarg
|
||||
return outerr
|
||||
|
||||
def pytest_make_collect_report(self, __multicall__, collector):
|
||||
method = self._getmethod(collector.config, collector.fspath)
|
||||
@@ -161,26 +161,18 @@ class CaptureManager:
|
||||
def pytest_runtest_teardown(self, item):
|
||||
self.resumecapture_item(item)
|
||||
|
||||
def pytest__teardown_final(self, __multicall__, session):
|
||||
method = self._getmethod(session.config, None)
|
||||
self.resumecapture(method)
|
||||
try:
|
||||
rep = __multicall__.execute()
|
||||
finally:
|
||||
outerr = self.suspendcapture()
|
||||
if rep:
|
||||
addouterr(rep, outerr)
|
||||
return rep
|
||||
|
||||
def pytest_keyboard_interrupt(self, excinfo):
|
||||
if hasattr(self, '_capturing'):
|
||||
self.suspendcapture()
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_makereport(self, __multicall__, item, call):
|
||||
self.deactivate_funcargs()
|
||||
funcarg_outerr = self.deactivate_funcargs()
|
||||
rep = __multicall__.execute()
|
||||
outerr = self.suspendcapture(item)
|
||||
if funcarg_outerr is not None:
|
||||
outerr = (outerr[0] + funcarg_outerr[0],
|
||||
outerr[1] + funcarg_outerr[1])
|
||||
if not rep.passed:
|
||||
addouterr(rep, outerr)
|
||||
if not rep.passed or rep.when == "teardown":
|
||||
@@ -188,23 +180,29 @@ class CaptureManager:
|
||||
item.outerr = outerr
|
||||
return rep
|
||||
|
||||
error_capsysfderror = "cannot use capsys and capfd at the same time"
|
||||
|
||||
def pytest_funcarg__capsys(request):
|
||||
"""enables capturing of writes to sys.stdout/sys.stderr and makes
|
||||
captured output available via ``capsys.readouterr()`` method calls
|
||||
which return a ``(out, err)`` tuple.
|
||||
"""
|
||||
return CaptureFuncarg(py.io.StdCapture)
|
||||
if "capfd" in request._funcargs:
|
||||
raise request.raiseerror(error_capsysfderror)
|
||||
return CaptureFixture(py.io.StdCapture)
|
||||
|
||||
def pytest_funcarg__capfd(request):
|
||||
"""enables capturing of writes to file descriptors 1 and 2 and makes
|
||||
captured output available via ``capsys.readouterr()`` method calls
|
||||
which return a ``(out, err)`` tuple.
|
||||
"""
|
||||
if "capsys" in request._funcargs:
|
||||
request.raiseerror(error_capsysfderror)
|
||||
if not hasattr(os, 'dup'):
|
||||
py.test.skip("capfd funcarg needs os.dup")
|
||||
return CaptureFuncarg(py.io.StdCaptureFD)
|
||||
pytest.skip("capfd funcarg needs os.dup")
|
||||
return CaptureFixture(py.io.StdCaptureFD)
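Typical use of the capsys/capfd funcargs described in the docstrings above, as
a minimal sketch::

    def test_greeting_is_printed(capsys):
        print("hello")
        out, err = capsys.readouterr()
        assert out == "hello\n"
        assert err == ""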
|
||||
|
||||
class CaptureFuncarg:
|
||||
class CaptureFixture:
|
||||
def __init__(self, captureclass):
|
||||
self.capture = captureclass(now=False)
|
||||
|
||||
@@ -213,8 +211,9 @@ class CaptureFuncarg:
|
||||
|
||||
def _finalize(self):
|
||||
if hasattr(self, 'capture'):
|
||||
self.capture.reset()
|
||||
outerr = self.capture.reset()
|
||||
del self.capture
|
||||
return outerr
|
||||
|
||||
def readouterr(self):
|
||||
return self.capture.readouterr()
|
||||
|
||||
@@ -11,8 +11,12 @@ def pytest_cmdline_parse(pluginmanager, args):
|
||||
return config
|
||||
|
||||
def pytest_unconfigure(config):
|
||||
for func in config._cleanup:
|
||||
func()
|
||||
while 1:
|
||||
try:
|
||||
fin = config._cleanup.pop()
|
||||
except IndexError:
|
||||
break
|
||||
fin()
|
||||
|
||||
class Parser:
|
||||
""" Parser for command line arguments. """
|
||||
@@ -31,9 +35,6 @@ class Parser:
|
||||
if option.dest:
|
||||
self._processopt(option)
|
||||
|
||||
def addnote(self, note):
|
||||
self._notes.append(note)
|
||||
|
||||
def getgroup(self, name, description="", after=None):
|
||||
""" get (or create) a named option Group.
|
||||
|
||||
@@ -79,6 +80,7 @@ class Parser:
|
||||
self._inidict[name] = (help, type, default)
|
||||
self._ininames.append(name)
|
||||
|
||||
|
||||
class OptionGroup:
|
||||
def __init__(self, name, description="", parser=None):
|
||||
self.name = name
|
||||
@@ -149,20 +151,24 @@ class Conftest(object):
|
||||
p = current.join(opt1[len(opt)+1:], abs=1)
|
||||
self._confcutdir = p
|
||||
break
|
||||
for arg in args + [current]:
|
||||
foundanchor = False
|
||||
for arg in args:
|
||||
if hasattr(arg, 'startswith') and arg.startswith("--"):
|
||||
continue
|
||||
anchor = current.join(arg, abs=1)
|
||||
if anchor.check(): # we found some file object
|
||||
self._path2confmods[None] = self.getconftestmodules(anchor)
|
||||
# let's also consider test* dirs
|
||||
if anchor.check(dir=1):
|
||||
for x in anchor.listdir("test*"):
|
||||
if x.check(dir=1):
|
||||
self.getconftestmodules(x)
|
||||
break
|
||||
else:
|
||||
assert 0, "no root of filesystem?"
|
||||
self._try_load_conftest(anchor)
|
||||
foundanchor = True
|
||||
if not foundanchor:
|
||||
self._try_load_conftest(current)
|
||||
|
||||
def _try_load_conftest(self, anchor):
|
||||
self._path2confmods[None] = self.getconftestmodules(anchor)
|
||||
# let's also consider test* subdirs
|
||||
if anchor.check(dir=1):
|
||||
for x in anchor.listdir("test*"):
|
||||
if x.check(dir=1):
|
||||
self.getconftestmodules(x)
|
||||
|
||||
def getconftestmodules(self, path):
|
||||
""" return a list of imported conftest modules for the given path. """
|
||||
@@ -254,11 +260,14 @@ class Config(object):
|
||||
self.hook = self.pluginmanager.hook
|
||||
self._inicache = {}
|
||||
self._cleanup = []
|
||||
|
||||
|
||||
@classmethod
|
||||
def fromdictargs(cls, option_dict, args):
|
||||
""" constructor useable for subprocesses. """
|
||||
config = cls()
|
||||
# XXX slightly crude way to initialize capturing
|
||||
import _pytest.capture
|
||||
_pytest.capture.pytest_cmdline_parse(config.pluginmanager, args)
|
||||
config._preparse(args, addopts=False)
|
||||
config.option.__dict__.update(option_dict)
|
||||
for x in config.option.plugins:
|
||||
@@ -283,11 +292,10 @@ class Config(object):
|
||||
|
||||
def _setinitialconftest(self, args):
|
||||
# capture output during conftest init (#issue93)
|
||||
from _pytest.capture import CaptureManager
|
||||
capman = CaptureManager()
|
||||
self.pluginmanager.register(capman, 'capturemanager')
|
||||
# will be unregistered in capture.py's unconfigure()
|
||||
capman.resumecapture(capman._getmethod_preoptionparse(args))
|
||||
# XXX introduce load_conftest hook to avoid needing to know
|
||||
# about capturing plugin here
|
||||
capman = self.pluginmanager.getplugin("capturemanager")
|
||||
capman.resumecapture()
|
||||
try:
|
||||
try:
|
||||
self._conftest.setinitial(args)
|
||||
@@ -340,6 +348,14 @@ class Config(object):
|
||||
args.append(py.std.os.getcwd())
|
||||
self.args = args
|
||||
|
||||
def addinivalue_line(self, name, line):
|
||||
""" add a line to an ini-file option. The option must have been
|
||||
declared but might not yet be set in which case the line becomes the
|
||||
the first line in its value. """
|
||||
x = self.getini(name)
|
||||
assert isinstance(x, list)
|
||||
x.append(line) # modifies the cached list inline
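A plugin or conftest.py would typically call the new addinivalue_line() from
pytest_configure, e.g. to register a custom marker (the marker name here is
arbitrary)::

    def pytest_configure(config):
        config.addinivalue_line(
            "markers", "slow: marks a test as slow-running")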
|
||||
|
||||
def getini(self, name):
|
||||
""" return configuration value from an ini file. If the
|
||||
specified name hasn't been registered through a prior ``parse.addini``
|
||||
|
||||
@@ -79,10 +79,11 @@ class PluginManager(object):
|
||||
self.import_plugin(spec)
|
||||
|
||||
def register(self, plugin, name=None, prepend=False):
|
||||
assert not self.isregistered(plugin), plugin
|
||||
if self._name2plugin.get(name, None) == -1:
|
||||
return
|
||||
name = name or getattr(plugin, '__name__', str(id(plugin)))
|
||||
if name in self._name2plugin:
|
||||
return False
|
||||
if self.isregistered(plugin, name):
|
||||
raise ValueError("Plugin already registered: %s=%s" %(name, plugin))
|
||||
#self.trace("registering", name, plugin)
|
||||
self._name2plugin[name] = plugin
|
||||
self.call_plugin(plugin, "pytest_addhooks", {'pluginmanager': self})
|
||||
@@ -211,6 +212,14 @@ class PluginManager(object):
|
||||
self.register(mod, modname)
|
||||
self.consider_module(mod)
|
||||
|
||||
def pytest_configure(self, config):
|
||||
config.addinivalue_line("markers",
|
||||
"tryfirst: mark a hook implementation function such that the "
|
||||
"plugin machinery will try to call it first/as early as possible.")
|
||||
config.addinivalue_line("markers",
|
||||
"trylast: mark a hook implementation function such that the "
|
||||
"plugin machinery will try to call it last/as late as possible.")
|
||||
|
||||
def pytest_plugin_registered(self, plugin):
|
||||
import pytest
|
||||
dic = self.call_plugin(plugin, "pytest_namespace", {}) or {}
|
||||
@@ -313,13 +322,15 @@ def importplugin(importspec):
|
||||
name = importspec
|
||||
try:
|
||||
mod = "_pytest." + name
|
||||
return __import__(mod, None, None, '__doc__')
|
||||
__import__(mod)
|
||||
return sys.modules[mod]
|
||||
except ImportError:
|
||||
#e = py.std.sys.exc_info()[1]
|
||||
#if str(e).find(name) == -1:
|
||||
# raise
|
||||
pass #
|
||||
return __import__(importspec, None, None, '__doc__')
|
||||
__import__(importspec)
|
||||
return sys.modules[importspec]
|
||||
|
||||
class MultiCall:
|
||||
""" execute a call into multiple python functions/methods. """
|
||||
@@ -431,10 +442,7 @@ _preinit = []
|
||||
def _preloadplugins():
|
||||
_preinit.append(PluginManager(load=True))
|
||||
|
||||
def main(args=None, plugins=None):
|
||||
""" returned exit code integer, after an in-process testing run
|
||||
with the given command line arguments, preloading an optional list
|
||||
of passed in plugin objects. """
|
||||
def _prepareconfig(args=None, plugins=None):
|
||||
if args is None:
|
||||
args = sys.argv[1:]
|
||||
elif isinstance(args, py.path.local):
|
||||
@@ -448,17 +456,22 @@ def main(args=None, plugins=None):
|
||||
else: # subsequent calls to main will create a fresh instance
|
||||
_pluginmanager = PluginManager(load=True)
|
||||
hook = _pluginmanager.hook
|
||||
try:
|
||||
if plugins:
|
||||
for plugin in plugins:
|
||||
_pluginmanager.register(plugin)
|
||||
config = hook.pytest_cmdline_parse(
|
||||
pluginmanager=_pluginmanager, args=args)
|
||||
exitstatus = hook.pytest_cmdline_main(config=config)
|
||||
except UsageError:
|
||||
e = sys.exc_info()[1]
|
||||
sys.stderr.write("ERROR: %s\n" %(e.args[0],))
|
||||
exitstatus = 3
|
||||
if plugins:
|
||||
for plugin in plugins:
|
||||
_pluginmanager.register(plugin)
|
||||
return hook.pytest_cmdline_parse(
|
||||
pluginmanager=_pluginmanager, args=args)
|
||||
|
||||
def main(args=None, plugins=None):
|
||||
""" return exit code, after performing an in-process test run.
|
||||
|
||||
:arg args: list of command line arguments.
|
||||
|
||||
:arg plugins: list of plugin objects to be auto-registered during
|
||||
initialization.
|
||||
"""
|
||||
config = _prepareconfig(args, plugins)
|
||||
exitstatus = config.hook.pytest_cmdline_main(config=config)
|
||||
return exitstatus
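
For orientation, the refactored entry point can be driven in-process like
this (the test path and the plugin object are purely illustrative)::

    import pytest

    class MyPlugin:
        def pytest_configure(self, config):
            print("configured, plugins pre-registered via main()")

    # roughly equivalent to running "py.test -q tests/" from the shell
    exitstatus = pytest.main(["-q", "tests/"], plugins=[MyPlugin()])
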
|
||||
|
||||
class UsageError(Exception):
|
||||
|
||||
@@ -56,6 +56,7 @@ def pytest_cmdline_main(config):
|
||||
elif config.option.help:
|
||||
config.pluginmanager.do_configure(config)
|
||||
showhelp(config)
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
return 0
|
||||
|
||||
def showhelp(config):
|
||||
@@ -78,6 +79,8 @@ def showhelp(config):
|
||||
|
||||
tw.line() ; tw.line()
|
||||
#tw.sep("=")
|
||||
tw.line("to see available markers type: py.test --markers")
|
||||
tw.line("to see available fixtures type: py.test --fixtures")
|
||||
return
|
||||
|
||||
tw.line("conftest.py options:")
|
||||
@@ -113,7 +116,7 @@ def pytest_report_header(config):
|
||||
verinfo = getpluginversioninfo(config)
|
||||
if verinfo:
|
||||
lines.extend(verinfo)
|
||||
|
||||
|
||||
if config.option.traceconfig:
|
||||
lines.append("active plugins:")
|
||||
plugins = []
|
||||
|
||||
@@ -121,16 +121,23 @@ def pytest_generate_tests(metafunc):
|
||||
def pytest_itemstart(item, node=None):
|
||||
""" (deprecated, use pytest_runtest_logstart). """
|
||||
|
||||
def pytest_runtest_protocol(item):
|
||||
""" implements the standard runtest_setup/call/teardown protocol including
|
||||
capturing exceptions and calling reporting hooks on the results accordingly.
|
||||
def pytest_runtest_protocol(item, nextitem):
|
||||
""" implements the runtest_setup/call/teardown protocol for
|
||||
the given test item, including capturing exceptions and calling
|
||||
reporting hooks.
|
||||
|
||||
:arg item: test item for which the runtest protocol is performed.
|
||||
|
||||
:arg nextitem: the scheduled-to-be-next test item (or None if this
|
||||
is the end my friend). This argument is passed on to
|
||||
:py:func:`pytest_runtest_teardown`.
|
||||
|
||||
:return boolean: True if no further hook implementations should be invoked.
|
||||
"""
|
||||
pytest_runtest_protocol.firstresult = True
|
||||
|
||||
def pytest_runtest_logstart(nodeid, location):
|
||||
""" signal the start of a test run. """
|
||||
""" signal the start of running a single test item. """
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
""" called before ``pytest_runtest_call(item)``. """
|
||||
@@ -138,8 +145,14 @@ def pytest_runtest_setup(item):
|
||||
def pytest_runtest_call(item):
|
||||
""" called to execute the test ``item``. """
|
||||
|
||||
def pytest_runtest_teardown(item):
|
||||
""" called after ``pytest_runtest_call``. """
|
||||
def pytest_runtest_teardown(item, nextitem):
|
||||
""" called after ``pytest_runtest_call``.
|
||||
|
||||
:arg nextitem: the scheduled-to-be-next test item (None if no further
|
||||
test item is scheduled). This argument can be used to
|
||||
perform exact teardowns, i.e. calling just enough finalizers
|
||||
so that nextitem only needs to call setup-functions.
|
||||
"""
|
||||
|
||||
def pytest_runtest_makereport(item, call):
|
||||
""" return a :py:class:`_pytest.runner.TestReport` object
|
||||
@@ -149,15 +162,8 @@ def pytest_runtest_makereport(item, call):
|
||||
pytest_runtest_makereport.firstresult = True
|
||||
|
||||
def pytest_runtest_logreport(report):
|
||||
""" process item test report. """
|
||||
|
||||
# special handling for final teardown - somewhat internal for now
|
||||
def pytest__teardown_final(session):
|
||||
""" called before test session finishes. """
|
||||
pytest__teardown_final.firstresult = True
|
||||
|
||||
def pytest__teardown_final_logerror(report, session):
|
||||
""" called if runtest_teardown_final failed. """
|
||||
""" process a test setup/call/teardown report relating to
|
||||
the respective phase of executing a test. """
|
||||
|
||||
# -------------------------------------------------------------------------
|
||||
# test session related hooks
|
||||
@@ -187,7 +193,7 @@ def pytest_assertrepr_compare(config, op, left, right):
|
||||
# hooks for influencing reporting (invoked from _pytest_terminal)
|
||||
# -------------------------------------------------------------------------
|
||||
|
||||
def pytest_report_header(config):
|
||||
def pytest_report_header(config, startdir):
|
||||
""" return a string to be displayed as header info for terminal reporting."""
|
||||
|
||||
def pytest_report_teststatus(report):
|
||||
|
||||
_pytest/impl (new file, 254 lines)
@@ -0,0 +1,254 @@
Sorting per-resource
-----------------------------

for any given set of items:

- collect items per session-scoped parametrized funcarg
- re-order the items until no parametrizations are mixed

examples::

    test()
    test1(s1)
    test1(s2)
    test2()
    test3(s1)
    test3(s2)

gets sorted to::

    test()
    test2()
    test1(s1)
    test3(s1)
    test1(s2)
    test3(s2)
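
A rough sketch of the reordering described above (not the actual pytest
implementation; the "callspec" attribute access is an assumption made for
illustration)::

    def reorder_items(items, argname):
        # keep unparametrized items first, then group the rest by the value
        # of their session-scoped parameter so parametrizations never mix
        unparametrized, order, by_param = [], [], {}
        for item in items:
            callspec = getattr(item, "callspec", None)
            param = callspec.params.get(argname) if callspec else None
            if param is None:
                unparametrized.append(item)
                continue
            if param not in by_param:
                order.append(param)
                by_param[param] = []
            by_param[param].append(item)
        reordered = list(unparametrized)
        for param in order:
            reordered.extend(by_param[param])
        return reordered
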
the new @setup functions
--------------------------------------

Consider a given @setup-marked function::

    @pytest.mark.setup(maxscope=SCOPE)
    def mysetup(request, arg1, arg2, ...)
        ...
        request.addfinalizer(fin)
        ...

then FUNCARGSET denotes the set of (arg1, arg2, ...) funcargs and
all of its dependent funcargs. The mysetup function will execute
for any matching test item once per scope.

The scope is determined as the minimum scope of all scopes of the args
in FUNCARGSET and the given "maxscope".

If mysetup has been called and no finalizers have been called yet, it is
called "active".

Furthermore the following rules apply:

- if an arg value in FUNCARGSET is about to be torn down, the
  mysetup-registered finalizers will execute as well.

- there will never be two active mysetup invocations.
Example 1, session scope::

    @pytest.mark.funcarg(scope="session", params=[1,2])
    def db(request):
        request.addfinalizer(db_finalize)

    @pytest.mark.setup
    def mysetup(request, db):
        request.addfinalizer(mysetup_finalize)
        ...

And a given test module::

    def test_something():
        ...
    def test_otherthing():
        pass

Here is what happens::

    db(request) executes with request.param == 1
    mysetup(request, db) executes
    test_something() executes
    test_otherthing() executes
    mysetup_finalize() executes
    db_finalize() executes
    db(request) executes with request.param == 2
    mysetup(request, db) executes
    test_something() executes
    test_otherthing() executes
    mysetup_finalize() executes
    db_finalize() executes
Example 2, session/function scope::

    @pytest.mark.funcarg(scope="session", params=[1,2])
    def db(request):
        request.addfinalizer(db_finalize)

    @pytest.mark.setup(scope="function")
    def mysetup(request, db):
        ...
        request.addfinalizer(mysetup_finalize)
        ...

And a given test module::

    def test_something():
        ...
    def test_otherthing():
        pass

Here is what happens::

    db(request) executes with request.param == 1
    mysetup(request, db) executes
    test_something() executes
    mysetup_finalize() executes
    mysetup(request, db) executes
    test_otherthing() executes
    mysetup_finalize() executes
    db_finalize() executes
    db(request) executes with request.param == 2
    mysetup(request, db) executes
    test_something() executes
    mysetup_finalize() executes
    mysetup(request, db) executes
    test_otherthing() executes
    mysetup_finalize() executes
    db_finalize() executes
Example 3 - funcargs session-mix
----------------------------------------

Similar with funcargs, an example::

    @pytest.mark.funcarg(scope="session", params=[1,2])
    def db(request):
        request.addfinalizer(db_finalize)

    @pytest.mark.funcarg(scope="function")
    def table(request, db):
        ...
        request.addfinalizer(table_finalize)
        ...

And a given test module::

    def test_something(table):
        ...
    def test_otherthing(table):
        pass
    def test_thirdthing():
        pass

Here is what happens::

    db(request) executes with param == 1
    table(request, db)
    test_something(table)
    table_finalize()
    table(request, db)
    test_otherthing(table)
    table_finalize()
    db_finalize
    db(request) executes with param == 2
    table(request, db)
    test_something(table)
    table_finalize()
    table(request, db)
    test_otherthing(table)
    table_finalize()
    db_finalize
    test_thirdthing()
Data structures
--------------------

pytest internally maintains a dict of active funcargs with cache, param,
finalizer, (scopeitem?) information::

    active_funcargs = dict()

if a parametrized "db" is activated::

    active_funcargs["db"] = FuncargInfo(dbvalue, paramindex,
                                        FuncargFinalize(...), scopeitem)

if a test is torn down and the next test requires a differently
parametrized "db"::

    for argname in item.callspec.params:
        if argname in active_funcargs:
            funcarginfo = active_funcargs[argname]
            if funcarginfo.param != item.callspec.params[argname]:
                funcarginfo.callfinalizer()
                del node2funcarg[funcarginfo.scopeitem]
                del active_funcargs[argname]
    nodes_to_be_torn_down = ...
    for node in nodes_to_be_torn_down:
        if node in node2funcarg:
            argname = node2funcarg[node]
            active_funcargs[argname].callfinalizer()
            del node2funcarg[node]
            del active_funcargs[argname]

if a test is setup requiring a "db" funcarg::

    if "db" in active_funcargs:
        return active_funcargs["db"][0]
    funcarginfo = setup_funcarg()
    active_funcargs["db"] = funcarginfo
    node2funcarg[funcarginfo.scopeitem] = "db"
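
The same bookkeeping as a small, self-contained Python model (FuncargInfo,
active_funcargs and node2funcarg follow the notes above, not the actual
pytest source)::

    class FuncargInfo:
        def __init__(self, value, param, finalizer, scopeitem):
            self.value = value
            self.param = param
            self.finalizer = finalizer
            self.scopeitem = scopeitem

        def callfinalizer(self):
            self.finalizer()

    active_funcargs = {}
    node2funcarg = {}

    def activate(argname, value, param, finalizer, scopeitem):
        # called when a (possibly parametrized) funcarg value is set up
        active_funcargs[argname] = FuncargInfo(value, param, finalizer, scopeitem)
        node2funcarg[scopeitem] = argname

    def teardown_if_reparametrized(argname, newparam):
        # called between tests when the next item needs a different param
        info = active_funcargs.get(argname)
        if info is not None and info.param != newparam:
            info.callfinalizer()
            del node2funcarg[info.scopeitem]
            del active_funcargs[argname]
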
Implementation plan for resources
------------------------------------------

1. Revert FuncargRequest to the old form, unmerge item/request (done)
2. make funcarg factories be discovered at collection time
3. Introduce funcarg marker
4. Introduce funcarg scope parameter
5. Introduce funcarg parametrize parameter
6. make setup functions be discovered at collection time
7. (Introduce a pytest_fixture_protocol/setup_funcargs hook)
methods and data structures
--------------------------------

A FuncargManager holds all information about funcarg definitions,
including parametrization and scope definitions. It implements
a pytest_generate_tests hook which performs parametrization as appropriate.

As a simple example, consider a tree where a test function requires
an "abc" funcarg and its factory defines it as parametrized and scoped
for Modules. When collection hits the function item, it creates
the metafunc object and calls funcargdb.pytest_generate_tests(metafunc),
which looks up available funcarg factories and their scope and parametrization.
This information is equivalent to what can be provided today directly
at the function site, so it should be relatively straightforward
to implement the additional way of defining parametrization/scoping.
conftest loading:
    each funcarg-factory will populate the session.funcargmanager

When a test item is collected, it grows a dictionary
(funcargname2factorycalllist). A factory lookup is performed
for each required funcarg. The resulting factory call is stored
with the item. If a function is parametrized, multiple items are
created with their respective factory calls; likewise, if a factory is
parametrized, multiple items and calls to the factory function are created.

At setup time, an item populates a funcargs mapping, mapping names
to values. If a value is not yet active, the funcarg factories are
queried for the given item; test functions and setup functions are put
in a class which looks up required funcarg factories.
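
A hypothetical sketch of such a manager wired into pytest_generate_tests
(FuncargManager, its registry and the use of metafunc.parametrize are
assumptions made for illustration, not the actual implementation)::

    class FuncargManager:
        def __init__(self):
            # argname -> (factory, scope, params), filled while loading conftests
            self.factories = {}

        def register(self, argname, factory, scope="function", params=None):
            self.factories[argname] = (factory, scope, params)

        def pytest_generate_tests(self, metafunc):
            for argname in metafunc.funcargnames:
                entry = self.factories.get(argname)
                if entry is None:
                    continue
                factory, scope, params = entry
                if params is not None:
                    # one item (and one factory call) per parameter value
                    metafunc.parametrize(argname, params)
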
|
||||
|
||||
|
||||
@@ -25,21 +25,39 @@ except NameError:
|
||||
long = int
|
||||
|
||||
|
||||
class Junit(py.xml.Namespace):
|
||||
pass
|
||||
|
||||
|
||||
# We need to get the subset of the invalid unicode ranges according to
|
||||
# XML 1.0 which are valid in this python build. Hence we calculate
|
||||
# this dynamically instead of hardcoding it. The spec range of valid
|
||||
# chars is: Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD]
|
||||
# | [#x10000-#x10FFFF]
|
||||
_illegal_unichrs = [(0x00, 0x08), (0x0B, 0x0C), (0x0E, 0x19),
|
||||
(0xD800, 0xDFFF), (0xFDD0, 0xFFFF)]
|
||||
_illegal_ranges = [unicode("%s-%s") % (unichr(low), unichr(high))
|
||||
for (low, high) in _illegal_unichrs
|
||||
_legal_chars = (0x09, 0x0A, 0x0d)
|
||||
_legal_ranges = (
|
||||
(0x20, 0xD7FF),
|
||||
(0xE000, 0xFFFD),
|
||||
(0x10000, 0x10FFFF),
|
||||
)
|
||||
_legal_xml_re = [unicode("%s-%s") % (unichr(low), unichr(high))
|
||||
for (low, high) in _legal_ranges
|
||||
if low < sys.maxunicode]
|
||||
illegal_xml_re = re.compile(unicode('[%s]') %
|
||||
unicode('').join(_illegal_ranges))
|
||||
del _illegal_unichrs
|
||||
del _illegal_ranges
|
||||
_legal_xml_re = [unichr(x) for x in _legal_chars] + _legal_xml_re
|
||||
illegal_xml_re = re.compile(unicode('[^%s]') %
|
||||
unicode('').join(_legal_xml_re))
|
||||
del _legal_chars
|
||||
del _legal_ranges
|
||||
del _legal_xml_re
|
||||
|
||||
def bin_xml_escape(arg):
|
||||
def repl(matchobj):
|
||||
i = ord(matchobj.group())
|
||||
if i <= 0xFF:
|
||||
return unicode('#x%02X') % i
|
||||
else:
|
||||
return unicode('#x%04X') % i
|
||||
return py.xml.raw(illegal_xml_re.sub(repl, py.xml.escape(arg)))
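
Illustrative behaviour of the escaping helper, assuming py.xml.escape and
py.xml.raw work as used above (the input value is hypothetical)::

    bin_xml_escape(u"ok\x1b")   # ESC is not a legal XML 1.0 character
    # -> raw markup u"ok#x1B"
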
|
||||
|
||||
def pytest_addoption(parser):
|
||||
group = parser.getgroup("terminal reporting")
|
||||
@@ -63,120 +81,105 @@ def pytest_unconfigure(config):
|
||||
config.pluginmanager.unregister(xml)
|
||||
|
||||
|
||||
def mangle_testnames(names):
|
||||
names = [x.replace(".py", "") for x in names if x != '()']
|
||||
names[0] = names[0].replace("/", '.')
|
||||
return names
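
For illustration, the helper turns a nodeid's parts into JUnit-style
class/test names (example values are made up)::

    mangle_testnames(["tests/test_auth.py", "TestLogin", "()", "test_ok"])
    # -> ["tests.test_auth", "TestLogin", "test_ok"]
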
|
||||
|
||||
class LogXML(object):
|
||||
def __init__(self, logfile, prefix):
|
||||
logfile = os.path.expanduser(os.path.expandvars(logfile))
|
||||
self.logfile = os.path.normpath(logfile)
|
||||
self.logfile = os.path.normpath(os.path.abspath(logfile))
|
||||
self.prefix = prefix
|
||||
self.test_logs = []
|
||||
self.tests = []
|
||||
self.passed = self.skipped = 0
|
||||
self.failed = self.errors = 0
|
||||
|
||||
def _opentestcase(self, report):
|
||||
names = report.nodeid.split("::")
|
||||
names[0] = names[0].replace("/", '.')
|
||||
names = tuple(names)
|
||||
d = {'time': getattr(report, 'duration', 0)}
|
||||
names = [x.replace(".py", "") for x in names if x != "()"]
|
||||
names = mangle_testnames(report.nodeid.split("::"))
|
||||
classnames = names[:-1]
|
||||
if self.prefix:
|
||||
classnames.insert(0, self.prefix)
|
||||
d['classname'] = ".".join(classnames)
|
||||
d['name'] = py.xml.escape(names[-1])
|
||||
attrs = ['%s="%s"' % item for item in sorted(d.items())]
|
||||
self.test_logs.append("\n<testcase %s>" % " ".join(attrs))
|
||||
self.tests.append(Junit.testcase(
|
||||
classname=".".join(classnames),
|
||||
name=names[-1],
|
||||
time=getattr(report, 'duration', 0)
|
||||
))
|
||||
|
||||
def _closetestcase(self):
|
||||
self.test_logs.append("</testcase>")
|
||||
|
||||
def appendlog(self, fmt, *args):
|
||||
def repl(matchobj):
|
||||
i = ord(matchobj.group())
|
||||
if i <= 0xFF:
|
||||
return unicode('#x%02X') % i
|
||||
else:
|
||||
return unicode('#x%04X') % i
|
||||
args = tuple([illegal_xml_re.sub(repl, py.xml.escape(arg))
|
||||
for arg in args])
|
||||
self.test_logs.append(fmt % args)
|
||||
def append(self, obj):
|
||||
self.tests[-1].append(obj)
|
||||
|
||||
def append_pass(self, report):
|
||||
self.passed += 1
|
||||
self._opentestcase(report)
|
||||
self._closetestcase()
|
||||
|
||||
def append_failure(self, report):
|
||||
self._opentestcase(report)
|
||||
#msg = str(report.longrepr.reprtraceback.extraline)
|
||||
if "xfail" in report.keywords:
|
||||
self.appendlog(
|
||||
'<skipped message="xfail-marked test passes unexpectedly"/>')
|
||||
if hasattr(report, "wasxfail"):
|
||||
self.append(
|
||||
Junit.skipped(message="xfail-marked test passes unexpectedly"))
|
||||
self.skipped += 1
|
||||
else:
|
||||
sec = dict(report.sections)
|
||||
self.appendlog('<failure message="test failure">%s</failure>',
|
||||
report.longrepr)
|
||||
fail = Junit.failure(message="test failure")
|
||||
fail.append(str(report.longrepr))
|
||||
self.append(fail)
|
||||
for name in ('out', 'err'):
|
||||
content = sec.get("Captured std%s" % name)
|
||||
if content:
|
||||
self.appendlog(
|
||||
"<system-%s>%%s</system-%s>" % (name, name), content)
|
||||
tag = getattr(Junit, 'system-'+name)
|
||||
self.append(tag(bin_xml_escape(content)))
|
||||
self.failed += 1
|
||||
self._closetestcase()
|
||||
|
||||
def append_collect_failure(self, report):
|
||||
self._opentestcase(report)
|
||||
#msg = str(report.longrepr.reprtraceback.extraline)
|
||||
self.appendlog('<failure message="collection failure">%s</failure>',
|
||||
report.longrepr)
|
||||
self._closetestcase()
|
||||
self.append(Junit.failure(str(report.longrepr),
|
||||
message="collection failure"))
|
||||
self.errors += 1
|
||||
|
||||
def append_collect_skipped(self, report):
|
||||
self._opentestcase(report)
|
||||
#msg = str(report.longrepr.reprtraceback.extraline)
|
||||
self.appendlog('<skipped message="collection skipped">%s</skipped>',
|
||||
report.longrepr)
|
||||
self._closetestcase()
|
||||
self.append(Junit.skipped(str(report.longrepr),
|
||||
message="collection skipped"))
|
||||
self.skipped += 1
|
||||
|
||||
def append_error(self, report):
|
||||
self._opentestcase(report)
|
||||
self.appendlog('<error message="test setup failure">%s</error>',
|
||||
report.longrepr)
|
||||
self._closetestcase()
|
||||
self.append(Junit.error(str(report.longrepr),
|
||||
message="test setup failure"))
|
||||
self.errors += 1
|
||||
|
||||
def append_skipped(self, report):
|
||||
self._opentestcase(report)
|
||||
if "xfail" in report.keywords:
|
||||
self.appendlog(
|
||||
'<skipped message="expected test failure">%s</skipped>',
|
||||
report.keywords['xfail'])
|
||||
if hasattr(report, "wasxfail"):
|
||||
self.append(Junit.skipped(str(report.wasxfail),
|
||||
message="expected test failure"))
|
||||
else:
|
||||
filename, lineno, skipreason = report.longrepr
|
||||
if skipreason.startswith("Skipped: "):
|
||||
skipreason = skipreason[9:]
|
||||
self.appendlog('<skipped type="pytest.skip" '
|
||||
'message="%s">%s</skipped>',
|
||||
skipreason, "%s:%s: %s" % report.longrepr,
|
||||
)
|
||||
self._closetestcase()
|
||||
self.append(
|
||||
Junit.skipped("%s:%s: %s" % report.longrepr,
|
||||
type="pytest.skip",
|
||||
message=skipreason
|
||||
))
|
||||
self.skipped += 1
|
||||
|
||||
def pytest_runtest_logreport(self, report):
|
||||
if report.passed:
|
||||
self.append_pass(report)
|
||||
if report.when == "call": # ignore setup/teardown
|
||||
self._opentestcase(report)
|
||||
self.append_pass(report)
|
||||
elif report.failed:
|
||||
self._opentestcase(report)
|
||||
if report.when != "call":
|
||||
self.append_error(report)
|
||||
else:
|
||||
self.append_failure(report)
|
||||
elif report.skipped:
|
||||
self._opentestcase(report)
|
||||
self.append_skipped(report)
|
||||
|
||||
def pytest_collectreport(self, report):
|
||||
if not report.passed:
|
||||
self._opentestcase(report)
|
||||
if report.failed:
|
||||
self.append_collect_failure(report)
|
||||
else:
|
||||
@@ -185,10 +188,11 @@ class LogXML(object):
|
||||
def pytest_internalerror(self, excrepr):
|
||||
self.errors += 1
|
||||
data = py.xml.escape(excrepr)
|
||||
self.test_logs.append(
|
||||
'\n<testcase classname="pytest" name="internal">'
|
||||
' <error message="internal error">'
|
||||
'%s</error></testcase>' % data)
|
||||
self.tests.append(
|
||||
Junit.testcase(
|
||||
Junit.error(data, message="internal error"),
|
||||
classname="pytest",
|
||||
name="internal"))
|
||||
|
||||
def pytest_sessionstart(self, session):
|
||||
self.suite_start_time = time.time()
|
||||
@@ -202,17 +206,17 @@ class LogXML(object):
|
||||
suite_stop_time = time.time()
|
||||
suite_time_delta = suite_stop_time - self.suite_start_time
|
||||
numtests = self.passed + self.failed
|
||||
|
||||
logfile.write('<?xml version="1.0" encoding="utf-8"?>')
|
||||
logfile.write('<testsuite ')
|
||||
logfile.write('name="" ')
|
||||
logfile.write('errors="%i" ' % self.errors)
|
||||
logfile.write('failures="%i" ' % self.failed)
|
||||
logfile.write('skips="%i" ' % self.skipped)
|
||||
logfile.write('tests="%i" ' % numtests)
|
||||
logfile.write('time="%.3f"' % suite_time_delta)
|
||||
logfile.write(' >')
|
||||
logfile.writelines(self.test_logs)
|
||||
logfile.write('</testsuite>')
|
||||
logfile.write(Junit.testsuite(
|
||||
self.tests,
|
||||
name="",
|
||||
errors=self.errors,
|
||||
failures=self.failed,
|
||||
skips=self.skipped,
|
||||
tests=numtests,
|
||||
time="%.3f" % suite_time_delta,
|
||||
).unicode(indent=0))
|
||||
logfile.close()
|
||||
|
||||
def pytest_terminal_summary(self, terminalreporter):
|
||||
|
||||
_pytest/main.py (208 lines)
@@ -2,7 +2,15 @@
|
||||
|
||||
import py
|
||||
import pytest, _pytest
|
||||
import inspect
|
||||
import os, sys, imp
|
||||
try:
|
||||
from collections import MutableMapping as MappingMixin
|
||||
except ImportError:
|
||||
from UserDict import DictMixin as MappingMixin
|
||||
|
||||
from _pytest.mark import MarkInfo
|
||||
|
||||
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
|
||||
|
||||
# exitcodes for the command line
|
||||
@@ -10,6 +18,9 @@ EXIT_OK = 0
|
||||
EXIT_TESTSFAILED = 1
|
||||
EXIT_INTERRUPTED = 2
|
||||
EXIT_INTERNALERROR = 3
|
||||
EXIT_USAGEERROR = 4
|
||||
|
||||
name_re = py.std.re.compile("^[a-zA-Z_]\w*$")
|
||||
|
||||
def pytest_addoption(parser):
|
||||
parser.addini("norecursedirs", "directory patterns to avoid for recursion",
|
||||
@@ -26,6 +37,8 @@ def pytest_addoption(parser):
|
||||
group._addoption('--maxfail', metavar="num",
|
||||
action="store", type="int", dest="maxfail", default=0,
|
||||
help="exit after first num failures or errors.")
|
||||
group._addoption('--strict', action="store_true",
|
||||
help="run pytest in strict mode, warnings become errors.")
|
||||
|
||||
group = parser.getgroup("collect", "collection")
|
||||
group.addoption('--collectonly',
|
||||
@@ -48,7 +61,7 @@ def pytest_addoption(parser):
|
||||
def pytest_namespace():
|
||||
collect = dict(Item=Item, Collector=Collector, File=File, Session=Session)
|
||||
return dict(collect=collect)
|
||||
|
||||
|
||||
def pytest_configure(config):
|
||||
py.test.config = config # compatibiltiy
|
||||
if config.option.exitfirst:
|
||||
@@ -60,30 +73,34 @@ def wrap_session(config, doit):
|
||||
session.exitstatus = EXIT_OK
|
||||
initstate = 0
|
||||
try:
|
||||
config.pluginmanager.do_configure(config)
|
||||
initstate = 1
|
||||
config.hook.pytest_sessionstart(session=session)
|
||||
initstate = 2
|
||||
doit(config, session)
|
||||
except pytest.UsageError:
|
||||
raise
|
||||
except KeyboardInterrupt:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
|
||||
session.exitstatus = EXIT_INTERRUPTED
|
||||
except:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.pluginmanager.notify_exception(excinfo, config.option)
|
||||
session.exitstatus = EXIT_INTERNALERROR
|
||||
if excinfo.errisinstance(SystemExit):
|
||||
sys.stderr.write("mainloop: caught Spurious SystemExit!\n")
|
||||
if not session.exitstatus and session._testsfailed:
|
||||
session.exitstatus = EXIT_TESTSFAILED
|
||||
if initstate >= 2:
|
||||
config.hook.pytest_sessionfinish(session=session,
|
||||
exitstatus=session.exitstatus)
|
||||
if initstate >= 1:
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
try:
|
||||
config.pluginmanager.do_configure(config)
|
||||
initstate = 1
|
||||
config.hook.pytest_sessionstart(session=session)
|
||||
initstate = 2
|
||||
doit(config, session)
|
||||
except pytest.UsageError:
|
||||
msg = sys.exc_info()[1].args[0]
|
||||
sys.stderr.write("ERROR: %s\n" %(msg,))
|
||||
session.exitstatus = EXIT_USAGEERROR
|
||||
except KeyboardInterrupt:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
|
||||
session.exitstatus = EXIT_INTERRUPTED
|
||||
except:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.pluginmanager.notify_exception(excinfo, config.option)
|
||||
session.exitstatus = EXIT_INTERNALERROR
|
||||
if excinfo.errisinstance(SystemExit):
|
||||
sys.stderr.write("mainloop: caught Spurious SystemExit!\n")
|
||||
finally:
|
||||
if initstate >= 2:
|
||||
config.hook.pytest_sessionfinish(session=session,
|
||||
exitstatus=session.exitstatus or (session._testsfailed and 1))
|
||||
if not session.exitstatus and session._testsfailed:
|
||||
session.exitstatus = EXIT_TESTSFAILED
|
||||
if initstate >= 1:
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
return session.exitstatus
|
||||
|
||||
def pytest_cmdline_main(config):
|
||||
@@ -101,8 +118,19 @@ def pytest_collection(session):
|
||||
def pytest_runtestloop(session):
|
||||
if session.config.option.collectonly:
|
||||
return True
|
||||
for item in session.session.items:
|
||||
item.config.hook.pytest_runtest_protocol(item=item)
|
||||
|
||||
def getnextitem(i):
|
||||
# this is a function to avoid python2
|
||||
# keeping sys.exc_info set when calling into a test
|
||||
# python2 keeps sys.exc_info till the frame is left
|
||||
try:
|
||||
return session.items[i+1]
|
||||
except IndexError:
|
||||
return None
|
||||
|
||||
for i, item in enumerate(session.items):
|
||||
nextitem = getnextitem(i)
|
||||
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
|
||||
if session.shouldstop:
|
||||
raise session.Interrupted(session.shouldstop)
|
||||
return True
|
||||
@@ -120,8 +148,10 @@ class HookProxy:
|
||||
def __init__(self, fspath, config):
|
||||
self.fspath = fspath
|
||||
self.config = config
|
||||
|
||||
def __getattr__(self, name):
|
||||
hookmethod = getattr(self.config.hook, name)
|
||||
|
||||
def call_matching_hooks(**kwargs):
|
||||
plugins = self.config._getmatchingplugins(self.fspath)
|
||||
return hookmethod.pcall(plugins, **kwargs)
|
||||
@@ -129,31 +159,69 @@ class HookProxy:
|
||||
|
||||
def compatproperty(name):
|
||||
def fget(self):
|
||||
# deprecated - use pytest.name
|
||||
return getattr(pytest, name)
|
||||
return property(fget, None, None,
|
||||
"deprecated attribute %r, use pytest.%s" % (name,name))
|
||||
|
||||
|
||||
return property(fget)
|
||||
|
||||
class NodeKeywords(MappingMixin):
|
||||
def __init__(self, node):
|
||||
parent = node.parent
|
||||
bases = parent and (parent.keywords._markers,) or ()
|
||||
self._markers = type("dynmarker", bases, {node.name: True})
|
||||
|
||||
def __getitem__(self, key):
|
||||
try:
|
||||
return getattr(self._markers, key)
|
||||
except AttributeError:
|
||||
raise KeyError(key)
|
||||
|
||||
def __setitem__(self, key, value):
|
||||
setattr(self._markers, key, value)
|
||||
|
||||
def __delitem__(self, key):
|
||||
delattr(self._markers, key)
|
||||
|
||||
def __iter__(self):
|
||||
return iter(self.keys())
|
||||
|
||||
def __len__(self):
|
||||
return len(self.keys())
|
||||
|
||||
def keys(self):
|
||||
return dir(self._markers)
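
A hypothetical illustration of how keywords are inherited through the
dynamically created marker class (DummyNode is not part of pytest)::

    class DummyNode:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.keywords = NodeKeywords(self)

    mod = DummyNode("test_module")
    item = DummyNode("test_one", parent=mod)
    assert "test_one" in item.keywords       # its own name
    assert "test_module" in item.keywords    # inherited from the parent node
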
|
||||
|
||||
class Node(object):
|
||||
""" base class for all Nodes in the collection tree.
|
||||
""" base class for Collector and Item the test collection tree.
|
||||
Collector subclasses have children, Items are terminal nodes."""
|
||||
|
||||
def __init__(self, name, parent=None, config=None, session=None):
|
||||
#: a unique name with the scope of the parent
|
||||
#: a unique name within the scope of the parent node
|
||||
self.name = name
|
||||
|
||||
#: the parent collector node.
|
||||
self.parent = parent
|
||||
|
||||
#: the test config object
|
||||
|
||||
#: the pytest config object
|
||||
self.config = config or parent.config
|
||||
|
||||
#: the collection this node is part of
|
||||
#: the session this node is part of
|
||||
self.session = session or parent.session
|
||||
|
||||
#: filesystem path where this node was collected from
|
||||
|
||||
#: filesystem path where this node was collected from (can be None)
|
||||
self.fspath = getattr(parent, 'fspath', None)
|
||||
|
||||
#: fspath sensitive hook proxy used to call pytest hooks
|
||||
self.ihook = self.session.gethookproxy(self.fspath)
|
||||
self.keywords = {self.name: True}
|
||||
|
||||
#: keywords/markers collected from all scopes
|
||||
self.keywords = NodeKeywords(self)
|
||||
|
||||
#self.extrainit()
|
||||
|
||||
#def extrainit(self):
|
||||
# """"extra initialization after Node is initialized. Implemented
|
||||
# by some subclasses. """
|
||||
|
||||
Module = compatproperty("Module")
|
||||
Class = compatproperty("Class")
|
||||
@@ -171,25 +239,28 @@ class Node(object):
|
||||
return cls
|
||||
|
||||
def __repr__(self):
|
||||
return "<%s %r>" %(self.__class__.__name__, getattr(self, 'name', None))
|
||||
return "<%s %r>" %(self.__class__.__name__,
|
||||
getattr(self, 'name', None))
|
||||
|
||||
# methods for ordering nodes
|
||||
@property
|
||||
def nodeid(self):
|
||||
""" a ::-separated string denoting its collection tree address. """
|
||||
try:
|
||||
return self._nodeid
|
||||
except AttributeError:
|
||||
self._nodeid = x = self._makeid()
|
||||
return x
|
||||
|
||||
|
||||
def _makeid(self):
|
||||
return self.parent.nodeid + "::" + self.name
|
||||
|
||||
def __eq__(self, other):
|
||||
if not isinstance(other, Node):
|
||||
return False
|
||||
return self.__class__ == other.__class__ and \
|
||||
self.name == other.name and self.parent == other.parent
|
||||
return (self.__class__ == other.__class__ and
|
||||
self.name == other.name and self.parent == other.parent)
|
||||
|
||||
def __ne__(self, other):
|
||||
return not self == other
|
||||
@@ -224,13 +295,13 @@ class Node(object):
|
||||
def listchain(self):
|
||||
""" return list of all parent collectors up to self,
|
||||
starting from root of collection tree. """
|
||||
l = [self]
|
||||
while 1:
|
||||
x = l[0]
|
||||
if x.parent is not None: # and x.parent.parent is not None:
|
||||
l.insert(0, x.parent)
|
||||
else:
|
||||
return l
|
||||
chain = []
|
||||
item = self
|
||||
while item is not None:
|
||||
chain.append(item)
|
||||
item = item.parent
|
||||
chain.reverse()
|
||||
return chain
|
||||
|
||||
def listnames(self):
|
||||
return [x.name for x in self.listchain()]
|
||||
@@ -248,6 +319,9 @@ class Node(object):
|
||||
pass
|
||||
|
||||
def _repr_failure_py(self, excinfo, style=None):
|
||||
fm = self.session._fixturemanager
|
||||
if excinfo.errisinstance(fm.FixtureLookupError):
|
||||
return excinfo.value.formatrepr()
|
||||
if self.config.option.fulltrace:
|
||||
style="long"
|
||||
else:
|
||||
@@ -325,6 +399,8 @@ class Item(Node):
|
||||
""" a basic test invocation item. Note that for a single function
|
||||
there might be multiple test invocation items.
|
||||
"""
|
||||
nextitem = None
|
||||
|
||||
def reportinfo(self):
|
||||
return self.fspath, None, ""
|
||||
|
||||
@@ -354,9 +430,9 @@ class Session(FSCollector):
|
||||
__module__ = 'builtins' # for py3
|
||||
|
||||
def __init__(self, config):
|
||||
super(Session, self).__init__(py.path.local(), parent=None,
|
||||
config=config, session=self)
|
||||
assert self.config.pluginmanager.register(self, name="session", prepend=True)
|
||||
FSCollector.__init__(self, py.path.local(), parent=None,
|
||||
config=config, session=self)
|
||||
self.config.pluginmanager.register(self, name="session", prepend=True)
|
||||
self._testsfailed = 0
|
||||
self.shouldstop = False
|
||||
self.trace = config.trace.root.get("collection")
|
||||
@@ -367,7 +443,7 @@ class Session(FSCollector):
|
||||
raise self.Interrupted(self.shouldstop)
|
||||
|
||||
def pytest_runtest_logreport(self, report):
|
||||
if report.failed and 'xfail' not in getattr(report, 'keywords', []):
|
||||
if report.failed and not hasattr(report, 'wasxfail'):
|
||||
self._testsfailed += 1
|
||||
maxfail = self.config.getvalue("maxfail")
|
||||
if maxfail and self._testsfailed >= maxfail:
|
||||
@@ -399,6 +475,7 @@ class Session(FSCollector):
|
||||
self._notfound = []
|
||||
self._initialpaths = set()
|
||||
self._initialparts = []
|
||||
self.items = items = []
|
||||
for arg in args:
|
||||
parts = self._parsearg(arg)
|
||||
self._initialparts.append(parts)
|
||||
@@ -414,7 +491,6 @@ class Session(FSCollector):
|
||||
if not genitems:
|
||||
return rep.result
|
||||
else:
|
||||
self.items = items = []
|
||||
if rep.passed:
|
||||
for node in rep.result:
|
||||
self.items.extend(self.genitems(node))
|
||||
@@ -442,7 +518,7 @@ class Session(FSCollector):
|
||||
if path.check(dir=1):
|
||||
assert not names, "invalid arg %r" %(arg,)
|
||||
for path in path.visit(fil=lambda x: x.check(file=1),
|
||||
rec=self._recurse, bf=True, sort=True):
|
||||
rec=self._recurse, bf=True, sort=True):
|
||||
for x in self._collectfile(path):
|
||||
yield x
|
||||
else:
|
||||
@@ -454,13 +530,13 @@ class Session(FSCollector):
|
||||
ihook = self.gethookproxy(path)
|
||||
if not self.isinitpath(path):
|
||||
if ihook.pytest_ignore_collect(path=path, config=self.config):
|
||||
return ()
|
||||
return ()
|
||||
return ihook.pytest_collect_file(path=path, parent=self)
|
||||
|
||||
def _recurse(self, path):
|
||||
ihook = self.gethookproxy(path.dirpath())
|
||||
if ihook.pytest_ignore_collect(path=path, config=self.config):
|
||||
return
|
||||
return
|
||||
for pat in self._norecursepatterns:
|
||||
if path.check(fnmatch=pat):
|
||||
return False
|
||||
@@ -472,6 +548,13 @@ class Session(FSCollector):
|
||||
mod = None
|
||||
path = [os.path.abspath('.')] + sys.path
|
||||
for name in x.split('.'):
|
||||
# ignore anything that's not a proper name here
|
||||
# else something like --pyargs will mess up '.'
|
||||
# since imp.find_module will actually sometimes work for it
|
||||
# but it's supposed to be considered a filesystem path
|
||||
# not a package
|
||||
if name_re.match(name) is None:
|
||||
return x
|
||||
try:
|
||||
fd, mod, type_ = imp.find_module(name, path)
|
||||
except ImportError:
|
||||
@@ -479,7 +562,7 @@ class Session(FSCollector):
|
||||
else:
|
||||
if fd is not None:
|
||||
fd.close()
|
||||
|
||||
|
||||
if type_[2] != imp.PKG_DIRECTORY:
|
||||
path = [os.path.dirname(mod)]
|
||||
else:
|
||||
@@ -502,7 +585,7 @@ class Session(FSCollector):
|
||||
raise pytest.UsageError(msg + arg)
|
||||
parts[0] = path
|
||||
return parts
|
||||
|
||||
|
||||
def matchnodes(self, matching, names):
|
||||
self.trace("matchnodes", matching, names)
|
||||
self.trace.root.indent += 1
|
||||
@@ -556,3 +639,12 @@ class Session(FSCollector):
|
||||
for x in self.genitems(subnode):
|
||||
yield x
|
||||
node.ihook.pytest_collectreport(report=rep)
|
||||
|
||||
def getfslineno(obj):
|
||||
# xxx let decorators etc specify a sane ordering
|
||||
if hasattr(obj, 'place_as'):
|
||||
obj = obj.place_as
|
||||
fslineno = py.code.getfslineno(obj)
|
||||
assert isinstance(fslineno[1], int), obj
|
||||
return fslineno
|
||||
|
||||
|
||||
_pytest/mark.py (112 lines)
@@ -14,12 +14,37 @@ def pytest_addoption(parser):
|
||||
"Terminate expression with ':' to make the first match match "
|
||||
"all subsequent tests (usually file-order). ")
|
||||
|
||||
group._addoption("-m",
|
||||
action="store", dest="markexpr", default="", metavar="MARKEXPR",
|
||||
help="only run tests matching given mark expression. "
|
||||
"example: -m 'mark1 and not mark2'."
|
||||
)
|
||||
|
||||
group.addoption("--markers", action="store_true", help=
|
||||
"show markers (builtin, plugin and per-project ones).")
|
||||
|
||||
parser.addini("markers", "markers for test functions", 'linelist')
|
||||
|
||||
def pytest_cmdline_main(config):
|
||||
if config.option.markers:
|
||||
config.pluginmanager.do_configure(config)
|
||||
tw = py.io.TerminalWriter()
|
||||
for line in config.getini("markers"):
|
||||
name, rest = line.split(":", 1)
|
||||
tw.write("@pytest.mark.%s:" % name, bold=True)
|
||||
tw.line(rest)
|
||||
tw.line()
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
return 0
|
||||
pytest_cmdline_main.tryfirst = True
|
||||
|
||||
def pytest_collection_modifyitems(items, config):
|
||||
keywordexpr = config.option.keyword
|
||||
if not keywordexpr:
|
||||
matchexpr = config.option.markexpr
|
||||
if not keywordexpr and not matchexpr:
|
||||
return
|
||||
selectuntil = False
|
||||
if keywordexpr[-1] == ":":
|
||||
if keywordexpr[-1:] == ":":
|
||||
selectuntil = True
|
||||
keywordexpr = keywordexpr[:-1]
|
||||
|
||||
@@ -29,22 +54,39 @@ def pytest_collection_modifyitems(items, config):
|
||||
if keywordexpr and skipbykeyword(colitem, keywordexpr):
|
||||
deselected.append(colitem)
|
||||
else:
|
||||
remaining.append(colitem)
|
||||
if selectuntil:
|
||||
keywordexpr = None
|
||||
if matchexpr:
|
||||
if not matchmark(colitem, matchexpr):
|
||||
deselected.append(colitem)
|
||||
continue
|
||||
remaining.append(colitem)
|
||||
|
||||
if deselected:
|
||||
config.hook.pytest_deselected(items=deselected)
|
||||
items[:] = remaining
|
||||
|
||||
class BoolDict:
|
||||
def __init__(self, mydict):
|
||||
self._mydict = mydict
|
||||
def __getitem__(self, name):
|
||||
return name in self._mydict
|
||||
|
||||
def matchmark(colitem, matchexpr):
|
||||
return eval(matchexpr, {}, BoolDict(colitem.obj.__dict__))
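
Roughly, the -m expression is evaluated with name lookups answered by the
test function's attribute dict (the values below are made up)::

    d = BoolDict({"webtest": True})
    eval("webtest and not slowtest", {}, d)   # -> True
    eval("slowtest", {}, d)                   # -> False
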
|
||||
|
||||
def pytest_configure(config):
|
||||
if config.option.strict:
|
||||
pytest.mark._config = config
|
||||
|
||||
def skipbykeyword(colitem, keywordexpr):
|
||||
""" return True if they given keyword expression means to
|
||||
skip this collector/item.
|
||||
"""
|
||||
if not keywordexpr:
|
||||
return
|
||||
|
||||
itemkeywords = getkeywords(colitem)
|
||||
|
||||
itemkeywords = colitem.keywords
|
||||
for key in filter(None, keywordexpr.split()):
|
||||
eor = key[:1] == '-'
|
||||
if eor:
|
||||
@@ -52,14 +94,6 @@ def skipbykeyword(colitem, keywordexpr):
|
||||
if not (eor ^ matchonekeyword(key, itemkeywords)):
|
||||
return True
|
||||
|
||||
def getkeywords(node):
|
||||
keywords = {}
|
||||
while node is not None:
|
||||
keywords.update(node.keywords)
|
||||
node = node.parent
|
||||
return keywords
|
||||
|
||||
|
||||
def matchonekeyword(key, itemkeywords):
|
||||
for elem in key.split("."):
|
||||
for kw in itemkeywords:
|
||||
@@ -77,15 +111,31 @@ class MarkGenerator:
|
||||
@py.test.mark.slowtest
|
||||
def test_function():
|
||||
pass
|
||||
|
||||
|
||||
will set a 'slowtest' :class:`MarkInfo` object
|
||||
on the ``test_function`` object. """
|
||||
|
||||
def __getattr__(self, name):
|
||||
if name[0] == "_":
|
||||
raise AttributeError(name)
|
||||
if hasattr(self, '_config'):
|
||||
self._check(name)
|
||||
return MarkDecorator(name)
|
||||
|
||||
def _check(self, name):
|
||||
try:
|
||||
if name in self._markers:
|
||||
return
|
||||
except AttributeError:
|
||||
pass
|
||||
self._markers = l = set()
|
||||
for line in self._config.getini("markers"):
|
||||
beginning = line.split(":", 1)
|
||||
x = beginning[0].split("(", 1)[0]
|
||||
l.add(x)
|
||||
if name not in self._markers:
|
||||
raise AttributeError("%r not a registered marker" % (name,))
|
||||
|
||||
class MarkDecorator:
|
||||
""" A decorator for test functions and test classes. When applied
|
||||
it will create :class:`MarkInfo` objects which may be
|
||||
@@ -133,8 +183,7 @@ class MarkDecorator:
|
||||
holder = MarkInfo(self.markname, self.args, self.kwargs)
|
||||
setattr(func, self.markname, holder)
|
||||
else:
|
||||
holder.kwargs.update(self.kwargs)
|
||||
holder.args += self.args
|
||||
holder.add(self.args, self.kwargs)
|
||||
return func
|
||||
kw = self.kwargs.copy()
|
||||
kw.update(kwargs)
|
||||
@@ -150,27 +199,20 @@ class MarkInfo:
|
||||
self.args = args
|
||||
#: keyword argument dictionary, empty if nothing specified
|
||||
self.kwargs = kwargs
|
||||
self._arglist = [(args, kwargs.copy())]
|
||||
|
||||
def __repr__(self):
|
||||
return "<MarkInfo %r args=%r kwargs=%r>" % (
|
||||
self.name, self.args, self.kwargs)
|
||||
|
||||
def pytest_itemcollected(item):
|
||||
if not isinstance(item, pytest.Function):
|
||||
return
|
||||
try:
|
||||
func = item.obj.__func__
|
||||
except AttributeError:
|
||||
func = getattr(item.obj, 'im_func', item.obj)
|
||||
pyclasses = (pytest.Class, pytest.Module)
|
||||
for node in item.listchain():
|
||||
if isinstance(node, pyclasses):
|
||||
marker = getattr(node.obj, 'pytestmark', None)
|
||||
if marker is not None:
|
||||
if isinstance(marker, list):
|
||||
for mark in marker:
|
||||
mark(func)
|
||||
else:
|
||||
marker(func)
|
||||
node = node.parent
|
||||
item.keywords.update(py.builtin._getfuncdict(func))
|
||||
def add(self, args, kwargs):
|
||||
""" add a MarkInfo with the given args and kwargs. """
|
||||
self._arglist.append((args, kwargs))
|
||||
self.args += args
|
||||
self.kwargs.update(kwargs)
|
||||
|
||||
def __iter__(self):
|
||||
""" yield MarkInfo objects each relating to a marking-call. """
|
||||
for args, kwargs in self._arglist:
|
||||
yield MarkInfo(self.name, args, kwargs)
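
For illustration, applying the same marker twice now records both calls
(the marker name and arguments are made up)::

    @pytest.mark.timeout(10)
    @pytest.mark.timeout(20, method="thread")
    def test_x():
        pass

    test_x.timeout.args                      # (20, 10) - accumulated
    [i.kwargs for i in test_x.timeout]       # [{'method': 'thread'}, {}]
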
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
""" monkeypatching and mocking functionality. """
|
||||
|
||||
import os, sys
|
||||
import os, sys, inspect
|
||||
|
||||
def pytest_funcarg__monkeypatch(request):
|
||||
"""The returned ``monkeypatch`` funcarg provides these
|
||||
@@ -13,6 +13,7 @@ def pytest_funcarg__monkeypatch(request):
|
||||
monkeypatch.setenv(name, value, prepend=False)
|
||||
monkeypatch.delenv(name, value, raising=True)
|
||||
monkeypatch.syspath_prepend(path)
|
||||
monkeypatch.chdir(path)
|
||||
|
||||
All modifications will be undone after the requesting
|
||||
test function has finished. The ``raising``
|
||||
@@ -30,6 +31,7 @@ class monkeypatch:
|
||||
def __init__(self):
|
||||
self._setattr = []
|
||||
self._setitem = []
|
||||
self._cwd = None
|
||||
|
||||
def setattr(self, obj, name, value, raising=True):
|
||||
""" set attribute ``name`` on ``obj`` to ``value``, by default
|
||||
@@ -37,6 +39,10 @@ class monkeypatch:
|
||||
oldval = getattr(obj, name, notset)
|
||||
if raising and oldval is notset:
|
||||
raise AttributeError("%r has no attribute %r" %(obj, name))
|
||||
|
||||
# avoid class descriptors like staticmethod/classmethod
|
||||
if inspect.isclass(obj):
|
||||
oldval = obj.__dict__.get(name, notset)
|
||||
self._setattr.insert(0, (obj, name, oldval))
|
||||
setattr(obj, name, value)
|
||||
|
||||
@@ -83,6 +89,17 @@ class monkeypatch:
|
||||
self._savesyspath = sys.path[:]
|
||||
sys.path.insert(0, str(path))
|
||||
|
||||
def chdir(self, path):
|
||||
""" change the current working directory to the specified path
|
||||
path can be a string or a py.path.local object
|
||||
"""
|
||||
if self._cwd is None:
|
||||
self._cwd = os.getcwd()
|
||||
if hasattr(path, "chdir"):
|
||||
path.chdir()
|
||||
else:
|
||||
os.chdir(path)
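
Typical use of the new chdir helper inside a test (tmpdir is the standard
pytest funcarg; the test body is illustrative)::

    def test_writes_into_cwd(monkeypatch, tmpdir):
        monkeypatch.chdir(tmpdir)         # undone automatically after the test
        open("output.txt", "w").close()
        assert tmpdir.join("output.txt").check()
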
|
||||
|
||||
def undo(self):
|
||||
""" undo previous changes. This call consumes the
|
||||
undo stack. Calling it a second time has no effect unless
|
||||
@@ -95,9 +112,17 @@ class monkeypatch:
|
||||
self._setattr[:] = []
|
||||
for dictionary, name, value in self._setitem:
|
||||
if value is notset:
|
||||
del dictionary[name]
|
||||
try:
|
||||
del dictionary[name]
|
||||
except KeyError:
|
||||
pass # was already deleted, so we have the desired state
|
||||
else:
|
||||
dictionary[name] = value
|
||||
self._setitem[:] = []
|
||||
if hasattr(self, '_savesyspath'):
|
||||
sys.path[:] = self._savesyspath
|
||||
del self._savesyspath
|
||||
|
||||
if self._cwd is not None:
|
||||
os.chdir(self._cwd)
|
||||
self._cwd = None
|
||||
|
||||
@@ -41,7 +41,7 @@ def pytest_make_collect_report(collector):
|
||||
|
||||
def call_optional(obj, name):
|
||||
method = getattr(obj, name, None)
|
||||
if method:
|
||||
if method is not None and not hasattr(method, "_pytestfixturefunction"):
|
||||
# If there's any problems allow the exception to raise rather than
|
||||
# silently ignoring them
|
||||
method()
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
import py, sys
|
||||
|
||||
class url:
|
||||
base = "http://paste.pocoo.org"
|
||||
base = "http://bpaste.net"
|
||||
xmlrpc = base + "/xmlrpc/"
|
||||
show = base + "/show/"
|
||||
|
||||
@@ -11,7 +11,7 @@ def pytest_addoption(parser):
|
||||
group._addoption('--pastebin', metavar="mode",
|
||||
action='store', dest="pastebin", default=None,
|
||||
type="choice", choices=['failed', 'all'],
|
||||
help="send failed|all info to Pocoo pastebin service.")
|
||||
help="send failed|all info to bpaste.net pastebin service.")
|
||||
|
||||
def pytest_configure(__multicall__, config):
|
||||
import tempfile
|
||||
@@ -38,7 +38,11 @@ def pytest_unconfigure(config):
|
||||
del tr._tw.__dict__['write']
|
||||
|
||||
def getproxy():
|
||||
return py.std.xmlrpclib.ServerProxy(url.xmlrpc).pastes
|
||||
if sys.version_info < (3, 0):
|
||||
from xmlrpclib import ServerProxy
|
||||
else:
|
||||
from xmlrpc.client import ServerProxy
|
||||
return ServerProxy(url.xmlrpc).pastes
|
||||
|
||||
def pytest_terminal_summary(terminalreporter):
|
||||
if terminalreporter.config.option.pastebin != "failed":
|
||||
|
||||
@@ -50,7 +50,7 @@ def pytest_make_collect_report(__multicall__, collector):
|
||||
|
||||
def pytest_runtest_makereport():
|
||||
pytestPDB.item = None
|
||||
|
||||
|
||||
class PdbInvoke:
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_makereport(self, item, call, __multicall__):
|
||||
@@ -59,7 +59,7 @@ class PdbInvoke:
|
||||
call.excinfo.errisinstance(pytest.skip.Exception) or \
|
||||
call.excinfo.errisinstance(py.std.bdb.BdbQuit):
|
||||
return rep
|
||||
if "xfail" in rep.keywords:
|
||||
if hasattr(rep, "wasxfail"):
|
||||
return rep
|
||||
# we assume that the above execute() suspended capturing
|
||||
# XXX we re-use the TerminalReporter's terminalwriter
|
||||
@@ -70,7 +70,13 @@ class PdbInvoke:
|
||||
tw.sep(">", "traceback")
|
||||
rep.toterminal(tw)
|
||||
tw.sep(">", "entering PDB")
|
||||
post_mortem(call.excinfo._excinfo[2])
|
||||
# A doctest.UnexpectedException is not useful for post_mortem.
|
||||
# Use the underlying exception instead:
|
||||
if isinstance(call.excinfo.value, py.std.doctest.UnexpectedException):
|
||||
tb = call.excinfo.value.exc_info[2]
|
||||
else:
|
||||
tb = call.excinfo._excinfo[2]
|
||||
post_mortem(tb)
|
||||
rep._pdbshown = True
|
||||
return rep
|
||||
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
|
||||
import py, pytest
|
||||
import sys, os
|
||||
import codecs
|
||||
import re
|
||||
import inspect
|
||||
import time
|
||||
@@ -244,8 +245,10 @@ class TmpTestdir:
|
||||
ret = None
|
||||
for name, value in items:
|
||||
p = self.tmpdir.join(name).new(ext=ext)
|
||||
source = py.builtin._totext(py.code.Source(value)).lstrip()
|
||||
p.write(source.encode("utf-8"), "wb")
|
||||
source = py.builtin._totext(py.code.Source(value)).strip()
|
||||
content = source.encode("utf-8") # + "\n"
|
||||
#content = content.rstrip() + "\n"
|
||||
p.write(content, "wb")
|
||||
if ret is None:
|
||||
ret = p
|
||||
return ret
|
||||
@@ -314,21 +317,11 @@ class TmpTestdir:
|
||||
result.extend(session.genitems(colitem))
|
||||
return result
|
||||
|
||||
def inline_genitems(self, *args):
|
||||
#config = self.parseconfig(*args)
|
||||
config = self.parseconfigure(*args)
|
||||
rec = self.getreportrecorder(config)
|
||||
session = Session(config)
|
||||
config.hook.pytest_sessionstart(session=session)
|
||||
session.perform_collect()
|
||||
config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK)
|
||||
return session.items, rec
|
||||
|
||||
def runitem(self, source):
|
||||
# used from runner functional tests
|
||||
item = self.getitem(source)
|
||||
# the test class where we are called from wants to provide the runner
|
||||
testclassinstance = py.builtin._getimself(self.request.function)
|
||||
testclassinstance = self.request.instance
|
||||
runner = testclassinstance.getrunner()
|
||||
return runner(item)
|
||||
|
||||
@@ -344,71 +337,66 @@ class TmpTestdir:
|
||||
l = list(args) + [p]
|
||||
reprec = self.inline_run(*l)
|
||||
reports = reprec.getreports("pytest_runtest_logreport")
|
||||
assert len(reports) == 1, reports
|
||||
return reports[0]
|
||||
assert len(reports) == 3, reports # setup/call/teardown
|
||||
return reports[1]
|
||||
|
||||
def inline_genitems(self, *args):
|
||||
return self.inprocess_run(list(args) + ['--collectonly'])
|
||||
|
||||
def inline_run(self, *args):
|
||||
args = ("-s", ) + args # otherwise FD leakage
|
||||
config = self.parseconfig(*args)
|
||||
reprec = self.getreportrecorder(config)
|
||||
#config.pluginmanager.do_configure(config)
|
||||
config.hook.pytest_cmdline_main(config=config)
|
||||
#config.pluginmanager.do_unconfigure(config)
|
||||
return reprec
|
||||
items, rec = self.inprocess_run(args)
|
||||
return rec
|
||||
|
||||
def config_preparse(self):
|
||||
config = self.Config()
|
||||
for plugin in self.plugins:
|
||||
if isinstance(plugin, str):
|
||||
config.pluginmanager.import_plugin(plugin)
|
||||
else:
|
||||
if isinstance(plugin, dict):
|
||||
plugin = PseudoPlugin(plugin)
|
||||
if not config.pluginmanager.isregistered(plugin):
|
||||
config.pluginmanager.register(plugin)
|
||||
return config
|
||||
def inprocess_run(self, args, plugins=None):
|
||||
rec = []
|
||||
items = []
|
||||
class Collect:
|
||||
def pytest_configure(x, config):
|
||||
rec.append(self.getreportrecorder(config))
|
||||
def pytest_itemcollected(self, item):
|
||||
items.append(item)
|
||||
if not plugins:
|
||||
plugins = []
|
||||
plugins.append(Collect())
|
||||
ret = self.pytestmain(list(args), plugins=plugins)
|
||||
reprec = rec[0]
|
||||
reprec.ret = ret
|
||||
assert len(rec) == 1
|
||||
return items, reprec
|
||||
|
||||
def parseconfig(self, *args):
|
||||
if not args:
|
||||
args = (self.tmpdir,)
|
||||
config = self.config_preparse()
|
||||
args = list(args)
|
||||
args = [str(x) for x in args]
|
||||
for x in args:
|
||||
if str(x).startswith('--basetemp'):
|
||||
break
|
||||
else:
|
||||
args.append("--basetemp=%s" % self.tmpdir.dirpath('basetemp'))
|
||||
config.parse(args)
|
||||
import _pytest.core
|
||||
config = _pytest.core._prepareconfig(args, self.plugins)
|
||||
# the in-process pytest invocation needs to avoid leaking FDs
|
||||
# so we register a "reset_capturings" call on the capturing manager
|
||||
# and make sure it gets called
|
||||
config._cleanup.append(
|
||||
config.pluginmanager.getplugin("capturemanager").reset_capturings)
|
||||
import _pytest.config
|
||||
self.request.addfinalizer(
|
||||
lambda: _pytest.config.pytest_unconfigure(config))
|
||||
return config
|
||||
|
||||
def reparseconfig(self, args=None):
|
||||
""" this is used from tests that want to re-invoke parse(). """
|
||||
if not args:
|
||||
args = [self.tmpdir]
|
||||
oldconfig = getattr(py.test, 'config', None)
|
||||
try:
|
||||
c = py.test.config = self.Config()
|
||||
c.basetemp = py.path.local.make_numbered_dir(prefix="reparse",
|
||||
keep=0, rootdir=self.tmpdir, lock_timeout=None)
|
||||
c.parse(args)
|
||||
c.pluginmanager.do_configure(c)
|
||||
self.request.addfinalizer(lambda: c.pluginmanager.do_unconfigure(c))
|
||||
return c
|
||||
finally:
|
||||
py.test.config = oldconfig
|
||||
|
||||
def parseconfigure(self, *args):
|
||||
config = self.parseconfig(*args)
|
||||
config.pluginmanager.do_configure(config)
|
||||
self.request.addfinalizer(lambda:
|
||||
config.pluginmanager.do_unconfigure(config))
|
||||
config.pluginmanager.do_unconfigure(config))
|
||||
return config
|
||||
|
||||
def getitem(self, source, funcname="test_func"):
|
||||
for item in self.getitems(source):
|
||||
items = self.getitems(source)
|
||||
for item in items:
|
||||
if item.name == funcname:
|
||||
return item
|
||||
assert 0, "%r item not found in module:\n%s" %(funcname, source)
|
||||
assert 0, "%r item not found in module:\n%s\nitems: %s" %(
|
||||
funcname, source, items)
|
||||
|
||||
def getitems(self, source):
|
||||
modcol = self.getmodulecol(source)
|
||||
@@ -421,7 +409,6 @@ class TmpTestdir:
|
||||
self.makepyfile(__init__ = "#")
|
||||
self.config = config = self.parseconfigure(path, *configargs)
|
||||
node = self.getnode(config, path)
|
||||
#config.pluginmanager.do_unconfigure(config)
|
||||
return node
|
||||
|
||||
def collect_by_name(self, modcol, name):
|
||||
@@ -438,9 +425,16 @@ class TmpTestdir:
|
||||
return py.std.subprocess.Popen(cmdargs, stdout=stdout, stderr=stderr, **kw)
|
||||
|
||||
def pytestmain(self, *args, **kwargs):
|
||||
ret = pytest.main(*args, **kwargs)
|
||||
if ret == 2:
|
||||
raise KeyboardInterrupt()
|
||||
class ResetCapturing:
|
||||
@pytest.mark.trylast
|
||||
def pytest_unconfigure(self, config):
|
||||
capman = config.pluginmanager.getplugin("capturemanager")
|
||||
capman.reset_capturings()
|
||||
plugins = kwargs.setdefault("plugins", [])
|
||||
rc = ResetCapturing()
|
||||
plugins.append(rc)
|
||||
return pytest.main(*args, **kwargs)
|
||||
|
||||
def run(self, *cmdargs):
|
||||
return self._run(*cmdargs)
|
||||
|
||||
@@ -449,28 +443,35 @@ class TmpTestdir:
|
||||
p1 = self.tmpdir.join("stdout")
|
||||
p2 = self.tmpdir.join("stderr")
|
||||
print_("running", cmdargs, "curdir=", py.path.local())
|
||||
f1 = p1.open("wb")
|
||||
f2 = p2.open("wb")
|
||||
now = time.time()
|
||||
popen = self.popen(cmdargs, stdout=f1, stderr=f2,
|
||||
close_fds=(sys.platform != "win32"))
|
||||
ret = popen.wait()
|
||||
f1.close()
|
||||
f2.close()
|
||||
out = p1.read("rb")
|
||||
out = getdecoded(out).splitlines()
|
||||
err = p2.read("rb")
|
||||
err = getdecoded(err).splitlines()
|
||||
def dump_lines(lines, fp):
|
||||
try:
|
||||
for line in lines:
|
||||
py.builtin.print_(line, file=fp)
|
||||
except UnicodeEncodeError:
|
||||
print("couldn't print to %s because of encoding" % (fp,))
|
||||
dump_lines(out, sys.stdout)
|
||||
dump_lines(err, sys.stderr)
|
||||
f1 = codecs.open(str(p1), "w", encoding="utf8")
|
||||
f2 = codecs.open(str(p2), "w", encoding="utf8")
|
||||
try:
|
||||
now = time.time()
|
||||
popen = self.popen(cmdargs, stdout=f1, stderr=f2,
|
||||
close_fds=(sys.platform != "win32"))
|
||||
ret = popen.wait()
|
||||
finally:
|
||||
f1.close()
|
||||
f2.close()
|
||||
f1 = codecs.open(str(p1), "r", encoding="utf8")
|
||||
f2 = codecs.open(str(p2), "r", encoding="utf8")
|
||||
try:
|
||||
out = f1.read().splitlines()
|
||||
err = f2.read().splitlines()
|
||||
finally:
|
||||
f1.close()
|
||||
f2.close()
|
||||
self._dump_lines(out, sys.stdout)
|
||||
self._dump_lines(err, sys.stderr)
|
||||
return RunResult(ret, out, err, time.time()-now)
|
||||
|
||||
def _dump_lines(self, lines, fp):
|
||||
try:
|
||||
for line in lines:
|
||||
py.builtin.print_(line, file=fp)
|
||||
except UnicodeEncodeError:
|
||||
print("couldn't print to %s because of encoding" % (fp,))
|
||||
|
||||
def runpybin(self, scriptname, *args):
|
||||
fullargs = self._getpybinargs(scriptname) + args
|
||||
return self.run(*fullargs)
|
||||
@@ -529,6 +530,8 @@ class TmpTestdir:
|
||||
pexpect = py.test.importorskip("pexpect", "2.4")
|
||||
if hasattr(sys, 'pypy_version_info') and '64' in py.std.platform.machine():
|
||||
pytest.skip("pypy-64 bit not supported")
|
||||
if sys.platform == "darwin":
|
||||
pytest.xfail("pexpect does not work reliably on darwin?!")
|
||||
logfile = self.tmpdir.join("spawn.out")
|
||||
child = pexpect.spawn(cmd, logfile=logfile.open("w"))
|
||||
child.timeout = expect_timeout
|
||||
@@ -541,10 +544,6 @@ def getdecoded(out):
|
||||
return "INTERNAL not-utf8-decodeable, truncated string:\n%s" % (
|
||||
py.io.saferepr(out),)
|
||||
|
||||
class PseudoPlugin:
|
||||
def __init__(self, vars):
|
||||
self.__dict__.update(vars)
|
||||
|
||||
class ReportRecorder(object):
|
||||
def __init__(self, hook):
|
||||
self.hook = hook
|
||||
@@ -566,10 +565,17 @@ class ReportRecorder(object):
|
||||
def getreports(self, names="pytest_runtest_logreport pytest_collectreport"):
|
||||
return [x.report for x in self.getcalls(names)]
|
||||
|
||||
def matchreport(self, inamepart="", names="pytest_runtest_logreport pytest_collectreport", when=None):
|
||||
def matchreport(self, inamepart="",
|
||||
names="pytest_runtest_logreport pytest_collectreport", when=None):
|
||||
""" return a testreport whose dotted import path matches """
|
||||
l = []
|
||||
for rep in self.getreports(names=names):
|
||||
try:
|
||||
if not when and rep.when != "call" and rep.passed:
|
||||
# setup/teardown passing reports - let's ignore those
|
||||
continue
|
||||
except AttributeError:
|
||||
pass
|
||||
if when and getattr(rep, 'when', None) != when:
|
||||
continue
|
||||
if not inamepart or inamepart in rep.nodeid.split("::"):
|
||||
|
||||
_pytest/python.py: 1659 lines changed (file diff suppressed because it is too large)
@@ -1,4 +1,6 @@
|
||||
""" (disabled by default) create result information in a plain text file. """
|
||||
""" log machine-parseable test session result information in a plain
|
||||
text file.
|
||||
"""
|
||||
|
||||
import py
|
||||
|
||||
@@ -63,6 +65,8 @@ class ResultLog(object):
|
||||
self.write_log_entry(testpath, lettercode, longrepr)
|
||||
|
||||
def pytest_runtest_logreport(self, report):
|
||||
if report.when != "call" and report.passed:
|
||||
return
|
||||
res = self.config.hook.pytest_report_teststatus(report=report)
|
||||
code = res[1]
|
||||
if code == 'x':
|
||||
@@ -89,5 +93,8 @@ class ResultLog(object):
|
||||
self.log_outcome(report, code, longrepr)
|
||||
|
||||
def pytest_internalerror(self, excrepr):
|
||||
path = excrepr.reprcrash.path
|
||||
reprcrash = getattr(excrepr, 'reprcrash', None)
|
||||
path = getattr(reprcrash, "path", None)
|
||||
if path is None:
|
||||
path = "cwd:%s" % py.path.local()
|
||||
self.write_log_entry(path, '!', str(excrepr))
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
""" basic collect and runtest protocol implementations """
|
||||
|
||||
import py, sys, time
|
||||
import py, sys
|
||||
from time import time
|
||||
from py._code.code import TerminalRepr
|
||||
|
||||
def pytest_namespace():
|
||||
@@ -14,33 +15,60 @@ def pytest_namespace():
|
||||
#
|
||||
# pytest plugin hooks
|
||||
|
||||
def pytest_addoption(parser):
|
||||
group = parser.getgroup("terminal reporting", "reporting", after="general")
|
||||
group.addoption('--durations',
|
||||
action="store", type="int", default=None, metavar="N",
|
||||
help="show N slowest setup/test durations (N=0 for all)."),
|
||||
|
||||
def pytest_terminal_summary(terminalreporter):
|
||||
durations = terminalreporter.config.option.durations
|
||||
if durations is None:
|
||||
return
|
||||
tr = terminalreporter
|
||||
dlist = []
|
||||
for replist in tr.stats.values():
|
||||
for rep in replist:
|
||||
if hasattr(rep, 'duration'):
|
||||
dlist.append(rep)
|
||||
if not dlist:
|
||||
return
|
||||
dlist.sort(key=lambda x: x.duration)
|
||||
dlist.reverse()
|
||||
if not durations:
|
||||
tr.write_sep("=", "slowest test durations")
|
||||
else:
|
||||
tr.write_sep("=", "slowest %s test durations" % durations)
|
||||
dlist = dlist[:durations]
|
||||
|
||||
for rep in dlist:
|
||||
nodeid = rep.nodeid.replace("::()::", "::")
|
||||
tr.write_line("%02.2fs %-8s %s" %
|
||||
(rep.duration, rep.when, nodeid))
|
||||
|
||||
def pytest_sessionstart(session):
|
||||
session._setupstate = SetupState()
|
||||
|
||||
def pytest_sessionfinish(session, exitstatus):
|
||||
hook = session.config.hook
|
||||
rep = hook.pytest__teardown_final(session=session)
|
||||
if rep:
|
||||
hook.pytest__teardown_final_logerror(session=session, report=rep)
|
||||
session.exitstatus = 1
|
||||
def pytest_sessionfinish(session):
|
||||
session._setupstate.teardown_all()
|
||||
|
||||
class NodeInfo:
|
||||
def __init__(self, location):
|
||||
self.location = location
|
||||
|
||||
def pytest_runtest_protocol(item):
|
||||
def pytest_runtest_protocol(item, nextitem):
|
||||
item.ihook.pytest_runtest_logstart(
|
||||
nodeid=item.nodeid, location=item.location,
|
||||
)
|
||||
runtestprotocol(item)
|
||||
runtestprotocol(item, nextitem=nextitem)
|
||||
return True
|
||||
|
||||
def runtestprotocol(item, log=True):
|
||||
def runtestprotocol(item, log=True, nextitem=None):
|
||||
rep = call_and_report(item, "setup", log)
|
||||
reports = [rep]
|
||||
if rep.passed:
|
||||
reports.append(call_and_report(item, "call", log))
|
||||
reports.append(call_and_report(item, "teardown", log))
|
||||
reports.append(call_and_report(item, "teardown", log,
|
||||
nextitem=nextitem))
|
||||
return reports
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
@@ -49,16 +77,8 @@ def pytest_runtest_setup(item):
|
||||
def pytest_runtest_call(item):
|
||||
item.runtest()
|
||||
|
||||
def pytest_runtest_teardown(item):
|
||||
item.session._setupstate.teardown_exact(item)
|
||||
|
||||
def pytest__teardown_final(session):
|
||||
call = CallInfo(session._setupstate.teardown_all, when="teardown")
|
||||
if call.excinfo:
|
||||
ntraceback = call.excinfo.traceback .cut(excludepath=py._pydir)
|
||||
call.excinfo.traceback = ntraceback.filter()
|
||||
longrepr = call.excinfo.getrepr(funcargs=True)
|
||||
return TeardownErrorReport(longrepr)
|
||||
def pytest_runtest_teardown(item, nextitem):
|
||||
item.session._setupstate.teardown_exact(item, nextitem)
|
||||
|
||||
def pytest_report_teststatus(report):
|
||||
if report.when in ("setup", "teardown"):
|
||||
@@ -74,18 +94,18 @@ def pytest_report_teststatus(report):
|
||||
#
|
||||
# Implementation
|
||||
|
||||
def call_and_report(item, when, log=True):
|
||||
call = call_runtest_hook(item, when)
|
||||
def call_and_report(item, when, log=True, **kwds):
|
||||
call = call_runtest_hook(item, when, **kwds)
|
||||
hook = item.ihook
|
||||
report = hook.pytest_runtest_makereport(item=item, call=call)
|
||||
if log and (when == "call" or not report.passed):
|
||||
if log:
|
||||
hook.pytest_runtest_logreport(report=report)
|
||||
return report
|
||||
|
||||
def call_runtest_hook(item, when):
|
||||
def call_runtest_hook(item, when, **kwds):
|
||||
hookname = "pytest_runtest_" + when
|
||||
ihook = getattr(item.ihook, hookname)
|
||||
return CallInfo(lambda: ihook(item=item), when=when)
|
||||
return CallInfo(lambda: ihook(item=item, **kwds), when=when)
|
||||
|
||||
class CallInfo:
|
||||
""" Result/Exception info a function invocation. """
|
||||
@@ -95,7 +115,7 @@ class CallInfo:
|
||||
#: context of invocation: one of "setup", "call",
|
||||
#: "teardown", "memocollect"
|
||||
self.when = when
|
||||
self.start = time.time()
|
||||
self.start = time()
|
||||
try:
|
||||
try:
|
||||
self.result = func()
|
||||
@@ -104,7 +124,7 @@ class CallInfo:
|
||||
except:
|
||||
self.excinfo = py.code.ExceptionInfo()
|
||||
finally:
|
||||
self.stop = time.time()
|
||||
self.stop = time()
|
||||
|
||||
def __repr__(self):
|
||||
if self.excinfo:
|
||||
@@ -124,6 +144,10 @@ def getslaveinfoline(node):
|
||||
return s
|
||||
|
||||
class BaseReport(object):
|
||||
|
||||
def __init__(self, **kw):
|
||||
self.__dict__.update(kw)
|
||||
|
||||
def toterminal(self, out):
|
||||
longrepr = self.longrepr
|
||||
if hasattr(self, 'node'):
|
||||
@@ -131,7 +155,10 @@ class BaseReport(object):
|
||||
if hasattr(longrepr, 'toterminal'):
|
||||
longrepr.toterminal(out)
|
||||
else:
|
||||
out.line(str(longrepr))
|
||||
try:
|
||||
out.line(longrepr)
|
||||
except UnicodeEncodeError:
|
||||
out.line("<unprintable longrepr>")
|
||||
|
||||
passed = property(lambda x: x.outcome == "passed")
|
||||
failed = property(lambda x: x.outcome == "failed")
|
||||
@@ -173,7 +200,7 @@ class TestReport(BaseReport):
|
||||
they fail).
|
||||
"""
|
||||
def __init__(self, nodeid, location,
|
||||
keywords, outcome, longrepr, when, sections=(), duration=0):
|
||||
keywords, outcome, longrepr, when, sections=(), duration=0, **extra):
|
||||
#: normalized collection node id
|
||||
self.nodeid = nodeid
|
||||
|
||||
@@ -185,13 +212,13 @@ class TestReport(BaseReport):
|
||||
#: a name -> value dictionary containing all keywords and
|
||||
#: markers associated with a test invocation.
|
||||
self.keywords = keywords
|
||||
|
||||
|
||||
#: test outcome, always one of "passed", "failed", "skipped".
|
||||
self.outcome = outcome
|
||||
|
||||
#: None or a failure representation.
|
||||
self.longrepr = longrepr
|
||||
|
||||
|
||||
#: one of 'setup', 'call', 'teardown' to indicate runtest phase.
|
||||
self.when = when
|
||||
|
||||
@@ -202,6 +229,8 @@ class TestReport(BaseReport):
|
||||
#: time it took to run just the test
|
||||
self.duration = duration
|
||||
|
||||
self.__dict__.update(extra)
|
||||
|
||||
def __repr__(self):
|
||||
return "<TestReport %r when=%r outcome=%r>" % (
|
||||
self.nodeid, self.when, self.outcome)
|
||||
@@ -209,9 +238,10 @@ class TestReport(BaseReport):
|
||||
class TeardownErrorReport(BaseReport):
|
||||
outcome = "failed"
|
||||
when = "teardown"
|
||||
def __init__(self, longrepr):
|
||||
def __init__(self, longrepr, **extra):
|
||||
self.longrepr = longrepr
|
||||
self.sections = []
|
||||
self.__dict__.update(extra)
|
||||
|
||||
def pytest_make_collect_report(collector):
|
||||
call = CallInfo(collector._memocollect, "memocollect")
|
||||
@@ -233,12 +263,13 @@ def pytest_make_collect_report(collector):
|
||||
getattr(call, 'result', None))
|
||||
|
||||
class CollectReport(BaseReport):
|
||||
def __init__(self, nodeid, outcome, longrepr, result, sections=()):
|
||||
def __init__(self, nodeid, outcome, longrepr, result, sections=(), **extra):
|
||||
self.nodeid = nodeid
|
||||
self.outcome = outcome
|
||||
self.longrepr = longrepr
|
||||
self.result = result or []
|
||||
self.sections = list(sections)
|
||||
self.__dict__.update(extra)
|
||||
|
||||
@property
|
||||
def location(self):
|
||||
@@ -252,7 +283,7 @@ class CollectErrorRepr(TerminalRepr):
|
||||
def __init__(self, msg):
|
||||
self.longrepr = msg
|
||||
def toterminal(self, out):
|
||||
out.line(str(self.longrepr), red=True)
|
||||
out.line(self.longrepr, red=True)
|
||||
|
||||
class SetupState(object):
|
||||
""" shared state for setting up/tearing down test items or collectors. """
|
||||
@@ -264,6 +295,8 @@ class SetupState(object):
|
||||
""" attach a finalizer to the given colitem.
|
||||
if colitem is None, this will add a finalizer that
|
||||
is called at the end of teardown_all().
|
||||
if colitem is a tuple, it will be used as a key
|
||||
and needs an explicit call to _callfinalizers(key) later on.
|
||||
"""
|
||||
assert hasattr(finalizer, '__call__')
|
||||
#assert colitem in self.stack
|
||||
@@ -281,31 +314,35 @@ class SetupState(object):
|
||||
|
||||
def _teardown_with_finalization(self, colitem):
|
||||
self._callfinalizers(colitem)
|
||||
if colitem:
|
||||
if hasattr(colitem, "teardown"):
|
||||
colitem.teardown()
|
||||
for colitem in self._finalizers:
|
||||
assert colitem is None or colitem in self.stack
|
||||
assert colitem is None or colitem in self.stack \
|
||||
or isinstance(colitem, tuple)
|
||||
|
||||
def teardown_all(self):
|
||||
while self.stack:
|
||||
self._pop_and_teardown()
|
||||
self._teardown_with_finalization(None)
|
||||
for key in list(self._finalizers):
|
||||
self._teardown_with_finalization(key)
|
||||
assert not self._finalizers
|
||||
|
||||
def teardown_exact(self, item):
|
||||
if self.stack and item == self.stack[-1]:
|
||||
def teardown_exact(self, item, nextitem):
|
||||
needed_collectors = nextitem and nextitem.listchain() or []
|
||||
self._teardown_towards(needed_collectors)
|
||||
|
||||
def _teardown_towards(self, needed_collectors):
|
||||
while self.stack:
|
||||
if self.stack == needed_collectors[:len(self.stack)]:
|
||||
break
|
||||
self._pop_and_teardown()
|
||||
else:
|
||||
self._callfinalizers(item)
|
||||
|
||||
def prepare(self, colitem):
|
||||
""" setup objects along the collector chain to the test-method
|
||||
and teardown previously setup objects."""
|
||||
needed_collectors = colitem.listchain()
|
||||
while self.stack:
|
||||
if self.stack == needed_collectors[:len(self.stack)]:
|
||||
break
|
||||
self._pop_and_teardown()
|
||||
self._teardown_towards(needed_collectors)
|
||||
|
||||
# check if the last collection node has raised an error
|
||||
for col in self.stack:
|
||||
if hasattr(col, '_prepare_exc'):
|
||||
@@ -372,7 +409,9 @@ skip.Exception = Skipped
|
||||
|
||||
def fail(msg="", pytrace=True):
|
||||
""" explicitely fail an currently-executing test with the given Message.
|
||||
if @pytrace is not True the msg represents the full failure information.
|
||||
|
||||
:arg pytrace: if false the msg represents the full failure information
|
||||
and no python traceback will be reported.
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
raise Failed(msg=msg, pytrace=pytrace)
|
||||
@@ -387,9 +426,10 @@ def importorskip(modname, minversion=None):
|
||||
__tracebackhide__ = True
|
||||
compile(modname, '', 'eval') # to catch syntaxerrors
|
||||
try:
|
||||
mod = __import__(modname, None, None, ['__doc__'])
|
||||
__import__(modname)
|
||||
except ImportError:
|
||||
py.test.skip("could not import %r" %(modname,))
|
||||
mod = sys.modules[modname]
|
||||
if minversion is None:
|
||||
return mod
|
||||
verattr = getattr(mod, '__version__', None)
|
||||
|
||||
@@ -9,6 +9,21 @@ def pytest_addoption(parser):
|
||||
action="store_true", dest="runxfail", default=False,
|
||||
help="run tests even if they are marked xfail")
|
||||
|
||||
def pytest_configure(config):
|
||||
config.addinivalue_line("markers",
|
||||
"skipif(*conditions): skip the given test function if evaluation "
|
||||
"of all conditions has a True value. Evaluation happens within the "
|
||||
"module global context. Example: skipif('sys.platform == \"win32\"') "
|
||||
"skips the test if we are on the win32 platform. "
|
||||
)
|
||||
config.addinivalue_line("markers",
|
||||
"xfail(*conditions, reason=None, run=True): mark the the test function "
|
||||
"as an expected failure. Optionally specify a reason and run=False "
|
||||
"if you don't even want to execute the test function. Any positional "
|
||||
"condition strings will be evaluated (like with skipif) and if one is "
|
||||
"False the marker will not be applied."
|
||||
)
|
||||
|
||||
def pytest_namespace():
|
||||
return dict(xfail=xfail)
|
||||
|
||||
@@ -95,6 +110,7 @@ class MarkEvaluator:
|
||||
return expl
|
||||
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_setup(item):
|
||||
if not isinstance(item, pytest.Function):
|
||||
return
|
||||
@@ -117,6 +133,14 @@ def check_xfail_no_run(item):
|
||||
def pytest_runtest_makereport(__multicall__, item, call):
|
||||
if not isinstance(item, pytest.Function):
|
||||
return
|
||||
# unitttest special case, see setting of _unexpectedsuccess
|
||||
if hasattr(item, '_unexpectedsuccess'):
|
||||
rep = __multicall__.execute()
|
||||
if rep.when == "call":
|
||||
# we need to translate into how py.test encodes xpass
|
||||
rep.wasxfail = "reason: " + repr(item._unexpectedsuccess)
|
||||
rep.outcome = "failed"
|
||||
return rep
|
||||
if not (call.excinfo and
|
||||
call.excinfo.errisinstance(py.test.xfail.Exception)):
|
||||
evalxfail = getattr(item, '_evalxfail', None)
|
||||
@@ -125,27 +149,27 @@ def pytest_runtest_makereport(__multicall__, item, call):
|
||||
if call.excinfo and call.excinfo.errisinstance(py.test.xfail.Exception):
|
||||
if not item.config.getvalue("runxfail"):
|
||||
rep = __multicall__.execute()
|
||||
rep.keywords['xfail'] = "reason: " + call.excinfo.value.msg
|
||||
rep.wasxfail = "reason: " + call.excinfo.value.msg
|
||||
rep.outcome = "skipped"
|
||||
return rep
|
||||
rep = __multicall__.execute()
|
||||
evalxfail = item._evalxfail
|
||||
if not item.config.option.runxfail:
|
||||
if evalxfail.wasvalid() and evalxfail.istrue():
|
||||
if call.excinfo:
|
||||
rep.outcome = "skipped"
|
||||
rep.keywords['xfail'] = evalxfail.getexplanation()
|
||||
elif call.when == "call":
|
||||
rep.outcome = "failed"
|
||||
rep.keywords['xfail'] = evalxfail.getexplanation()
|
||||
return rep
|
||||
if 'xfail' in rep.keywords:
|
||||
del rep.keywords['xfail']
|
||||
if not rep.skipped:
|
||||
if not item.config.option.runxfail:
|
||||
if evalxfail.wasvalid() and evalxfail.istrue():
|
||||
if call.excinfo:
|
||||
rep.outcome = "skipped"
|
||||
elif call.when == "call":
|
||||
rep.outcome = "failed"
|
||||
else:
|
||||
return rep
|
||||
rep.wasxfail = evalxfail.getexplanation()
|
||||
return rep
|
||||
return rep
|
||||
|
||||
# called by terminalreporter progress reporting
|
||||
def pytest_report_teststatus(report):
|
||||
if 'xfail' in report.keywords:
|
||||
if hasattr(report, "wasxfail"):
|
||||
if report.skipped:
|
||||
return "xfailed", "x", "xfail"
|
||||
elif report.failed:
|
||||
@@ -192,7 +216,7 @@ def show_xfailed(terminalreporter, lines):
|
||||
if xfailed:
|
||||
for rep in xfailed:
|
||||
pos = rep.nodeid
|
||||
reason = rep.keywords['xfail']
|
||||
reason = rep.wasxfail
|
||||
lines.append("XFAIL %s" % (pos,))
|
||||
if reason:
|
||||
lines.append(" " + str(reason))
|
||||
@@ -202,7 +226,7 @@ def show_xpassed(terminalreporter, lines):
|
||||
if xpassed:
|
||||
for rep in xpassed:
|
||||
pos = rep.nodeid
|
||||
reason = rep.keywords['xfail']
|
||||
reason = rep.wasxfail
|
||||
lines.append("XPASS %s %s" %(pos, reason))
|
||||
|
||||
def cached_eval(config, expr, d):
|
||||
|
||||
@@ -2,7 +2,8 @@
|
||||
|
||||
This is a good source for looking at the various reporting hooks.
|
||||
"""
|
||||
import pytest, py
|
||||
import pytest
|
||||
import py
|
||||
import sys
|
||||
import os
|
||||
|
||||
@@ -11,7 +12,7 @@ def pytest_addoption(parser):
|
||||
group._addoption('-v', '--verbose', action="count",
|
||||
dest="verbose", default=0, help="increase verbosity."),
|
||||
group._addoption('-q', '--quiet', action="count",
|
||||
dest="quiet", default=0, help="decreate verbosity."),
|
||||
dest="quiet", default=0, help="decrease verbosity."),
|
||||
group._addoption('-r',
|
||||
action="store", dest="reportchars", default=None, metavar="chars",
|
||||
help="show extra test summary info as specified by chars (f)ailed, "
|
||||
@@ -37,13 +38,15 @@ def pytest_configure(config):
|
||||
stdout = py.std.sys.stdout
|
||||
if hasattr(os, 'dup') and hasattr(stdout, 'fileno'):
|
||||
try:
|
||||
newfd = os.dup(stdout.fileno())
|
||||
#print "got newfd", newfd
|
||||
newstdout = py.io.dupfile(stdout, buffering=1,
|
||||
encoding=stdout.encoding)
|
||||
except ValueError:
|
||||
pass
|
||||
else:
|
||||
stdout = os.fdopen(newfd, stdout.mode, 1)
|
||||
config._toclose = stdout
|
||||
config._cleanup.append(lambda: newstdout.close())
|
||||
assert stdout.encoding == newstdout.encoding
|
||||
stdout = newstdout
|
||||
|
||||
reporter = TerminalReporter(config, stdout)
|
||||
config.pluginmanager.register(reporter, 'terminalreporter')
|
||||
if config.option.debug or config.option.traceconfig:
|
||||
@@ -52,11 +55,6 @@ def pytest_configure(config):
|
||||
reporter.write_line("[traceconfig] " + msg)
|
||||
config.trace.root.setprocessor("pytest:config", mywriter)
|
||||
|
||||
def pytest_unconfigure(config):
|
||||
if hasattr(config, '_toclose'):
|
||||
#print "closing", config._toclose, config._toclose.fileno()
|
||||
config._toclose.close()
|
||||
|
||||
def getreportopt(config):
|
||||
reportopts = ""
|
||||
optvalue = config.option.report
|
||||
@@ -98,7 +96,7 @@ class TerminalReporter:
|
||||
self._numcollected = 0
|
||||
|
||||
self.stats = {}
|
||||
self.curdir = py.path.local()
|
||||
self.startdir = self.curdir = py.path.local()
|
||||
if file is None:
|
||||
file = py.std.sys.stdout
|
||||
self._tw = py.io.TerminalWriter(file)
|
||||
@@ -113,9 +111,9 @@ class TerminalReporter:
|
||||
def write_fspath_result(self, fspath, res):
|
||||
if fspath != self.currentfspath:
|
||||
self.currentfspath = fspath
|
||||
#fspath = self.curdir.bestrelpath(fspath)
|
||||
#fspath = self.startdir.bestrelpath(fspath)
|
||||
self._tw.line()
|
||||
#relpath = self.curdir.bestrelpath(fspath)
|
||||
#relpath = self.startdir.bestrelpath(fspath)
|
||||
self._tw.write(fspath + " ")
|
||||
self._tw.write(res)
|
||||
|
||||
@@ -165,9 +163,6 @@ class TerminalReporter:
|
||||
def pytest_deselected(self, items):
|
||||
self.stats.setdefault('deselected', []).extend(items)
|
||||
|
||||
def pytest__teardown_final_logerror(self, report):
|
||||
self.stats.setdefault("error", []).append(report)
|
||||
|
||||
def pytest_runtest_logstart(self, nodeid, location):
|
||||
# ensure that the path is printed before the
|
||||
# 1st test of a module starts running
|
||||
@@ -214,7 +209,7 @@ class TerminalReporter:
|
||||
self.currentfspath = -2
|
||||
|
||||
def pytest_collection(self):
|
||||
if not self.hasmarkup:
|
||||
if not self.hasmarkup and self.config.option.verbose >=1:
|
||||
self.write("collecting ... ", bold=True)
|
||||
|
||||
def pytest_collectreport(self, report):
|
||||
@@ -229,6 +224,9 @@ class TerminalReporter:
|
||||
self.report_collect()
|
||||
|
||||
def report_collect(self, final=False):
|
||||
if self.config.option.verbose < 0:
|
||||
return
|
||||
|
||||
errors = len(self.stats.get('error', []))
|
||||
skipped = len(self.stats.get('skipped', []))
|
||||
if final:
|
||||
@@ -250,6 +248,7 @@ class TerminalReporter:
|
||||
def pytest_collection_modifyitems(self):
|
||||
self.report_collect(True)
|
||||
|
||||
@pytest.mark.trylast
|
||||
def pytest_sessionstart(self, session):
|
||||
self._sessionstarttime = py.std.time.time()
|
||||
if not self.showheader:
|
||||
@@ -265,11 +264,23 @@ class TerminalReporter:
|
||||
getattr(self.config.option, 'pastebin', None):
|
||||
msg += " -- " + str(sys.executable)
|
||||
self.write_line(msg)
|
||||
lines = self.config.hook.pytest_report_header(config=self.config)
|
||||
lines = self.config.hook.pytest_report_header(
|
||||
config=self.config, startdir=self.startdir)
|
||||
lines.reverse()
|
||||
for line in flatten(lines):
|
||||
self.write_line(line)
|
||||
|
||||
def pytest_report_header(self, config):
|
||||
plugininfo = config.pluginmanager._plugin_distinfo
|
||||
if plugininfo:
|
||||
l = []
|
||||
for dist, plugin in plugininfo:
|
||||
name = dist.project_name
|
||||
if name.startswith("pytest-"):
|
||||
name = name[7:]
|
||||
l.append(name)
|
||||
return "plugins: %s" % ", ".join(l)
|
||||
|
||||
def pytest_collection_finish(self, session):
|
||||
if self.config.option.collectonly:
|
||||
self._printcollecteditems(session.items)
|
||||
@@ -289,10 +300,18 @@ class TerminalReporter:
|
||||
# we take care to leave out Instances aka ()
|
||||
# because later versions are going to get rid of them anyway
|
||||
if self.config.option.verbose < 0:
|
||||
for item in items:
|
||||
nodeid = item.nodeid
|
||||
nodeid = nodeid.replace("::()::", "::")
|
||||
self._tw.line(nodeid)
|
||||
if self.config.option.verbose < -1:
|
||||
counts = {}
|
||||
for item in items:
|
||||
name = item.nodeid.split('::', 1)[0]
|
||||
counts[name] = counts.get(name, 0) + 1
|
||||
for name, count in sorted(counts.items()):
|
||||
self._tw.line("%s: %d" % (name, count))
|
||||
else:
|
||||
for item in items:
|
||||
nodeid = item.nodeid
|
||||
nodeid = nodeid.replace("::()::", "::")
|
||||
self._tw.line(nodeid)
|
||||
return
|
||||
stack = []
|
||||
indent = ""
|
||||
@@ -424,27 +443,36 @@ class TerminalReporter:
|
||||
def summary_stats(self):
|
||||
session_duration = py.std.time.time() - self._sessionstarttime
|
||||
|
||||
keys = "failed passed skipped deselected".split()
|
||||
keys = "failed passed skipped deselected xfailed xpassed".split()
|
||||
for key in self.stats.keys():
|
||||
if key not in keys:
|
||||
keys.append(key)
|
||||
parts = []
|
||||
for key in keys:
|
||||
val = self.stats.get(key, None)
|
||||
if val:
|
||||
parts.append("%d %s" %(len(val), key))
|
||||
if key: # setup/teardown reports have an empty key, ignore them
|
||||
val = self.stats.get(key, None)
|
||||
if val:
|
||||
parts.append("%d %s" %(len(val), key))
|
||||
line = ", ".join(parts)
|
||||
# XXX coloring
|
||||
msg = "%s in %.2f seconds" %(line, session_duration)
|
||||
if self.verbosity >= 0:
|
||||
self.write_sep("=", msg, bold=True)
|
||||
else:
|
||||
self.write_line(msg, bold=True)
|
||||
#else:
|
||||
# self.write_line(msg, bold=True)
|
||||
|
||||
def summary_deselected(self):
|
||||
if 'deselected' in self.stats:
|
||||
self.write_sep("=", "%d tests deselected by %r" %(
|
||||
len(self.stats['deselected']), self.config.option.keyword), bold=True)
|
||||
l = []
|
||||
k = self.config.option.keyword
|
||||
if k:
|
||||
l.append("-k%s" % k)
|
||||
m = self.config.option.markexpr
|
||||
if m:
|
||||
l.append("-m %r" % m)
|
||||
if l:
|
||||
self.write_sep("=", "%d tests deselected by %r" %(
|
||||
len(self.stats['deselected']), " ".join(l)), bold=True)
|
||||
|
||||
def repr_pythonversion(v=None):
|
||||
if v is None:
|
||||
|
||||
@@ -40,13 +40,13 @@ class TempdirHandler:
|
||||
basetemp.mkdir()
|
||||
else:
|
||||
basetemp = py.path.local.make_numbered_dir(prefix='pytest-')
|
||||
self._basetemp = t = basetemp
|
||||
self._basetemp = t = basetemp.realpath()
|
||||
self.trace("new basetemp", t)
|
||||
return t
|
||||
|
||||
def finish(self):
|
||||
self.trace("finish")
|
||||
|
||||
|
||||
def pytest_configure(config):
|
||||
mp = monkeypatch()
|
||||
t = TempdirHandler(config)
|
||||
@@ -64,5 +64,5 @@ def pytest_funcarg__tmpdir(request):
|
||||
name = request._pyfuncitem.name
|
||||
name = py.std.re.sub("[\W]", "_", name)
|
||||
x = request.config._tmpdirhandler.mktemp(name, numbered=True)
|
||||
return x.realpath()
|
||||
return x
|
||||
|
||||
|
||||
@@ -2,6 +2,9 @@
|
||||
import pytest, py
|
||||
import sys, pdb
|
||||
|
||||
# for transfering markers
|
||||
from _pytest.python import transfer_markers
|
||||
|
||||
def pytest_pycollect_makeitem(collector, name, obj):
|
||||
unittest = sys.modules.get('unittest')
|
||||
if unittest is None:
|
||||
@@ -17,11 +20,25 @@ def pytest_pycollect_makeitem(collector, name, obj):
|
||||
return UnitTestCase(name, parent=collector)
|
||||
|
||||
class UnitTestCase(pytest.Class):
|
||||
nofuncargs = True # marker for fixturemanger.getfixtureinfo()
|
||||
# to declare that our children do not support funcargs
|
||||
|
||||
def collect(self):
|
||||
self.session._fixturemanager.parsefactories(self, unittest=True)
|
||||
loader = py.std.unittest.TestLoader()
|
||||
module = self.getparent(pytest.Module).obj
|
||||
cls = self.obj
|
||||
for name in loader.getTestCaseNames(self.obj):
|
||||
x = getattr(self.obj, name)
|
||||
funcobj = getattr(x, 'im_func', x)
|
||||
transfer_markers(funcobj, cls, module)
|
||||
if hasattr(funcobj, 'todo'):
|
||||
pytest.mark.xfail(reason=str(funcobj.todo))(funcobj)
|
||||
yield TestCaseFunction(name, parent=self)
|
||||
|
||||
if getattr(self.obj, 'runTest', None) is not None:
|
||||
yield TestCaseFunction('runTest', parent=self)
|
||||
|
||||
def setup(self):
|
||||
meth = getattr(self.obj, 'setUpClass', None)
|
||||
if meth is not None:
|
||||
@@ -37,17 +54,17 @@ class UnitTestCase(pytest.Class):
|
||||
class TestCaseFunction(pytest.Function):
|
||||
_excinfo = None
|
||||
|
||||
def __init__(self, name, parent):
|
||||
super(TestCaseFunction, self).__init__(name, parent)
|
||||
if hasattr(self._obj, 'todo'):
|
||||
getattr(self._obj, 'im_func', self._obj).xfail = \
|
||||
pytest.mark.xfail(reason=str(self._obj.todo))
|
||||
|
||||
def setup(self):
|
||||
self._testcase = self.parent.obj(self.name)
|
||||
self._obj = getattr(self._testcase, self.name)
|
||||
if hasattr(self._testcase, 'skip'):
|
||||
pytest.skip(self._testcase.skip)
|
||||
if hasattr(self._obj, 'skip'):
|
||||
pytest.skip(self._obj.skip)
|
||||
if hasattr(self._testcase, 'setup_method'):
|
||||
self._testcase.setup_method(self._obj)
|
||||
if hasattr(self, "_request"):
|
||||
self._request._fillfixtures()
|
||||
|
||||
def teardown(self):
|
||||
if hasattr(self._testcase, 'teardown_method'):
|
||||
@@ -83,28 +100,37 @@ class TestCaseFunction(pytest.Function):
|
||||
self._addexcinfo(rawexcinfo)
|
||||
def addFailure(self, testcase, rawexcinfo):
|
||||
self._addexcinfo(rawexcinfo)
|
||||
|
||||
def addSkip(self, testcase, reason):
|
||||
try:
|
||||
pytest.skip(reason)
|
||||
except pytest.skip.Exception:
|
||||
self._addexcinfo(sys.exc_info())
|
||||
def addExpectedFailure(self, testcase, rawexcinfo, reason):
|
||||
|
||||
def addExpectedFailure(self, testcase, rawexcinfo, reason=""):
|
||||
try:
|
||||
pytest.xfail(str(reason))
|
||||
except pytest.xfail.Exception:
|
||||
self._addexcinfo(sys.exc_info())
|
||||
def addUnexpectedSuccess(self, testcase, reason):
|
||||
pass
|
||||
|
||||
def addUnexpectedSuccess(self, testcase, reason=""):
|
||||
self._unexpectedsuccess = reason
|
||||
|
||||
def addSuccess(self, testcase):
|
||||
pass
|
||||
|
||||
def stopTest(self, testcase):
|
||||
pass
|
||||
|
||||
def runtest(self):
|
||||
self._testcase(result=self)
|
||||
|
||||
def _prunetraceback(self, excinfo):
|
||||
pytest.Function._prunetraceback(self, excinfo)
|
||||
excinfo.traceback = excinfo.traceback.filter(lambda x:not x.frame.f_globals.get('__unittest'))
|
||||
traceback = excinfo.traceback.filter(
|
||||
lambda x:not x.frame.f_globals.get('__unittest'))
|
||||
if traceback:
|
||||
excinfo.traceback = traceback
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_makereport(item, call):
|
||||
@@ -120,14 +146,19 @@ def pytest_runtest_protocol(item, __multicall__):
|
||||
ut = sys.modules['twisted.python.failure']
|
||||
Failure__init__ = ut.Failure.__init__.im_func
|
||||
check_testcase_implements_trial_reporter()
|
||||
def excstore(self, exc_value=None, exc_type=None, exc_tb=None):
|
||||
def excstore(self, exc_value=None, exc_type=None, exc_tb=None,
|
||||
captureVars=None):
|
||||
if exc_value is None:
|
||||
self._rawexcinfo = sys.exc_info()
|
||||
else:
|
||||
if exc_type is None:
|
||||
exc_type = type(exc_value)
|
||||
self._rawexcinfo = (exc_type, exc_value, exc_tb)
|
||||
Failure__init__(self, exc_value, exc_type, exc_tb)
|
||||
try:
|
||||
Failure__init__(self, exc_value, exc_type, exc_tb,
|
||||
captureVars=captureVars)
|
||||
except TypeError:
|
||||
Failure__init__(self, exc_value, exc_type, exc_tb)
|
||||
ut.Failure.__init__ = excstore
|
||||
try:
|
||||
return __multicall__.execute()
|
||||
|
||||
@@ -46,7 +46,7 @@ except ImportError:
|
||||
args = [quote(arg) for arg in args]
|
||||
return os.spawnl(os.P_WAIT, sys.executable, *args) == 0
|
||||
|
||||
DEFAULT_VERSION = "0.6.19"
|
||||
DEFAULT_VERSION = "0.6.27"
|
||||
DEFAULT_URL = "http://pypi.python.org/packages/source/d/distribute/"
|
||||
SETUPTOOLS_FAKED_VERSION = "0.6c11"
|
||||
|
||||
@@ -63,7 +63,7 @@ Description: xxx
|
||||
""" % SETUPTOOLS_FAKED_VERSION
|
||||
|
||||
|
||||
def _install(tarball):
|
||||
def _install(tarball, install_args=()):
|
||||
# extracting the tarball
|
||||
tmpdir = tempfile.mkdtemp()
|
||||
log.warn('Extracting in %s', tmpdir)
|
||||
@@ -81,7 +81,7 @@ def _install(tarball):
|
||||
|
||||
# installing
|
||||
log.warn('Installing Distribute')
|
||||
if not _python_cmd('setup.py', 'install'):
|
||||
if not _python_cmd('setup.py', 'install', *install_args):
|
||||
log.warn('Something went wrong during the installation.')
|
||||
log.warn('See the error message above.')
|
||||
finally:
|
||||
@@ -306,6 +306,9 @@ def _create_fake_setuptools_pkg_info(placeholder):
|
||||
log.warn('%s already exists', pkg_info)
|
||||
return
|
||||
|
||||
if not os.access(pkg_info, os.W_OK):
|
||||
log.warn("Don't have permissions to write %s, skipping", pkg_info)
|
||||
|
||||
log.warn('Creating %s', pkg_info)
|
||||
f = open(pkg_info, 'w')
|
||||
try:
|
||||
@@ -474,11 +477,20 @@ def _extractall(self, path=".", members=None):
|
||||
else:
|
||||
self._dbg(1, "tarfile: %s" % e)
|
||||
|
||||
def _build_install_args(argv):
|
||||
install_args = []
|
||||
user_install = '--user' in argv
|
||||
if user_install and sys.version_info < (2,6):
|
||||
log.warn("--user requires Python 2.6 or later")
|
||||
raise SystemExit(1)
|
||||
if user_install:
|
||||
install_args.append('--user')
|
||||
return install_args
|
||||
|
||||
def main(argv, version=DEFAULT_VERSION):
|
||||
"""Install or upgrade setuptools and EasyInstall"""
|
||||
tarball = download_setuptools()
|
||||
_install(tarball)
|
||||
_install(tarball, _build_install_args(argv))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
@@ -40,7 +40,7 @@ clean:
|
||||
-rm -rf $(BUILDDIR)/*
|
||||
|
||||
install: html
|
||||
@rsync -avz _build/html/ pytest.org:/www/pytest.org/latest
|
||||
rsync -avz _build/html/ pytest.org:/www/pytest.org/latest
|
||||
|
||||
installpdf: latexpdf
|
||||
@scp $(BUILDDIR)/latex/pytest.pdf pytest.org:/www/pytest.org/latest
|
||||
@@ -5,6 +5,11 @@ Release announcements
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
release-2.3.0
|
||||
release-2.2.4
|
||||
release-2.2.2
|
||||
release-2.2.1
|
||||
release-2.2.0
|
||||
release-2.1.3
|
||||
release-2.1.2
|
||||
release-2.1.1
|
||||
doc/en/announce/release-2.0.1.txt: new file, 67 lines
@@ -0,0 +1,67 @@
|
||||
py.test 2.0.1: bug fixes
|
||||
===========================================================================
|
||||
|
||||
Welcome to pytest-2.0.1, a maintenance and bug fix release of pytest,
|
||||
a mature testing tool for Python, supporting CPython 2.4-3.2, Jython
|
||||
and latest PyPy interpreters. See extensive docs with tested examples here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
If you want to install or upgrade pytest, just type one of::
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Many thanks to all issue reporters and people asking questions or
|
||||
complaining. Particular thanks to Floris Bruynooghe and Ronny Pfannschmidt
|
||||
for their great coding contributions and many others for feedback and help.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
Changes between 2.0.0 and 2.0.1
|
||||
----------------------------------------------
|
||||
|
||||
- refine and unify initial capturing so that it works nicely
|
||||
even if the logging module is used on an early-loaded conftest.py
|
||||
file or plugin.
|
||||
- fix issue12 - show plugin versions with "--version" and
|
||||
"--traceconfig" and also document how to add extra information
|
||||
to reporting test header
|
||||
- fix issue17 (import-* reporting issue on python3) by
|
||||
requiring py>1.4.0 (1.4.1 is going to include it)
|
||||
- fix issue10 (numpy arrays truth checking) by refining
|
||||
assertion interpretation in py lib
|
||||
- fix issue15: make nose compatibility tests compatible
|
||||
with python3 (now that nose-1.0 supports python3)
|
||||
- remove somewhat surprising "same-conftest" detection because
|
||||
it ignores conftest.py when they appear in several subdirs.
|
||||
- improve assertions ("not in"), thanks Floris Bruynooghe
|
||||
- improve behaviour/warnings when running on top of "python -OO"
|
||||
(assertions and docstrings are turned off, leading to potential
|
||||
false positives)
|
||||
- introduce a pytest_cmdline_processargs(args) hook
|
||||
to allow dynamic computation of command line arguments.
|
||||
This fixes a regression because py.test prior to 2.0
|
||||
allowed to set command line options from conftest.py
|
||||
files which so far pytest-2.0 only allowed from ini-files now.
|
||||
- fix issue7: assert failures in doctest modules.
|
||||
unexpected failures in doctests will not generally
|
||||
show nicer, i.e. within the doctest failing context.
|
||||
- fix issue9: setup/teardown functions for an xfail-marked
|
||||
test will report as xfail if they fail but report as normally
|
||||
passing (not xpassing) if they succeed. This only is true
|
||||
for "direct" setup/teardown invocations because teardown_class/
|
||||
teardown_module cannot closely relate to a single test.
|
||||
- fix issue14: no logging errors at process exit
|
||||
- refinements to "collecting" output on non-ttys
|
||||
- refine internal plugin registration and --traceconfig output
|
||||
- introduce a mechanism to prevent/unregister plugins from the
|
||||
command line, see http://pytest.org/latest/plugins.html#cmdunregister
|
||||
- activate resultlog plugin by default
|
||||
- fix regression wrt yielded tests which due to the
|
||||
collection-before-running semantics were not
|
||||
setup as with pytest 1.3.4. Note, however, that
|
||||
the recommended and much cleaner way to do test
|
||||
parametrization remains the "pytest_generate_tests"
|
||||
mechanism, see the docs.
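
For readers unfamiliar with that mechanism, a rough sketch of a
pytest_generate_tests hook in the funcarg style of this release could look
as follows (the ``numiter`` argument name and the value range are invented
for the example)::

    # conftest.py
    def pytest_generate_tests(metafunc):
        if "numiter" in metafunc.funcargnames:
            for i in range(3):
                # run any test requesting "numiter" three times
                metafunc.addcall(funcargs=dict(numiter=i))

    # test_numiter.py
    def test_numiter_is_small(numiter):
        assert numiter < 3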
|
||||
@@ -1,4 +1,4 @@
|
||||
py.test 2.0.2: bug fixes, improved xfail/skip expressions, speedups
|
||||
py.test 2.0.2: bug fixes, improved xfail/skip expressions, speed ups
|
||||
===========================================================================
|
||||
|
||||
Welcome to pytest-2.0.2, a maintenance and bug fix release of pytest,
|
||||
@@ -32,17 +32,17 @@ Changes between 2.0.1 and 2.0.2
|
||||
|
||||
Also you can now access module globals from xfail/skipif
|
||||
expressions so that this for example works now::
|
||||
|
||||
|
||||
import pytest
|
||||
import mymodule
|
||||
@pytest.mark.skipif("mymodule.__version__[0] == "1")
|
||||
def test_function():
|
||||
pass
|
||||
|
||||
This will not run the test function if the module's version string
|
||||
This will not run the test function if the module's version string
|
||||
does not start with a "1". Note that specifying a string instead
|
||||
of a boolean expressions allows py.test to report meaningful information
|
||||
when summarizing a test run as to what conditions lead to skipping
|
||||
of a boolean expressions allows py.test to report meaningful information
|
||||
when summarizing a test run as to what conditions lead to skipping
|
||||
(or xfail-ing) tests.
|
||||
|
||||
- fix issue28 - setup_method and pytest_generate_tests work together
|
||||
@@ -65,7 +65,7 @@ Changes between 2.0.1 and 2.0.2
|
||||
- fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular
|
||||
thanks to Laura Creighton who also revieved parts of the documentation.
|
||||
|
||||
- fix slighly wrong output of verbose progress reporting for classes
|
||||
- fix slighly wrong output of verbose progress reporting for classes
|
||||
(thanks Amaury)
|
||||
|
||||
- more precise (avoiding of) deprecation warnings for node.Class|Function accesses
|
||||
@@ -1,7 +1,7 @@
|
||||
py.test 2.1.0: perfected assertions and bug fixes
|
||||
===========================================================================
|
||||
|
||||
Welcome to the relase of pytest-2.1, a mature testing tool for Python,
|
||||
Welcome to the release of pytest-2.1, a mature testing tool for Python,
|
||||
supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See
|
||||
the improved extensive docs (now also as PDF!) with tested examples here:
|
||||
|
||||
@@ -15,7 +15,7 @@ assert statements in test modules upon import, using a PEP302 hook.
|
||||
See http://pytest.org/assert.html#advanced-assertion-introspection for
|
||||
detailed information. The work has been partly sponsored by my company,
|
||||
merlinux GmbH.
|
||||
|
||||
|
||||
For further details on bug fixes and smaller enhancements see below.
|
||||
|
||||
If you want to install or upgrade pytest, just type one of::
|
||||
@@ -39,10 +39,9 @@ Changes between 2.0.3 and 2.1.0
|
||||
unexpected exceptions
|
||||
- fix issue47: timing output in junitxml for test cases is now correct
|
||||
- fix issue48: typo in MarkInfo repr leading to exception
|
||||
- fix issue49: avoid confusing error when initizaliation partially fails
|
||||
- fix issue49: avoid confusing error when initialization partially fails
|
||||
- fix issue44: env/username expansion for junitxml file path
|
||||
- show releaselevel information in test runs for pypy
|
||||
- reworked doc pages for better navigation and PDF generation
|
||||
- report KeyboardInterrupt even if interrupted during session startup
|
||||
- fix issue 35 - provide PDF doc version and download link from index page
|
||||
|
||||
doc/en/announce/release-2.2.0.txt: new file, 95 lines
@@ -0,0 +1,95 @@
|
||||
py.test 2.2.0: test marking++, parametrization++ and duration profiling
|
||||
===========================================================================
|
||||
|
||||
pytest-2.2.0 is a test-suite compatible release of the popular
|
||||
py.test testing tool. Plugins might need upgrades. It comes
|
||||
with these improvements:
|
||||
|
||||
* easier and more powerful parametrization of tests:
|
||||
|
||||
- new @pytest.mark.parametrize decorator to run tests with different arguments
|
||||
- new metafunc.parametrize() API for parametrizing arguments independently
|
||||
- see examples at http://pytest.org/latest/example/parametrize.html and the short sketch after this list
|
||||
- NOTE that parametrize() related APIs are still a bit experimental
|
||||
and might change in future releases.
|
||||
|
||||
* improved handling of test markers and refined marking mechanism:
|
||||
|
||||
- "-m markexpr" option for selecting tests according to their mark
|
||||
- a new "markers" ini-variable for registering test markers for your project
|
||||
- the new "--strict" bails out with an error if using unregistered markers.
|
||||
- see examples at http://pytest.org/latest/example/markers.html
|
||||
|
||||
* duration profiling: new "--durations=N" option showing the N slowest test
|
||||
execution or setup/teardown calls. This is most useful if you want to
|
||||
find out where your slowest test code is.
|
||||
|
||||
* also 2.2.0 performs more eager calling of teardown/finalizers functions
|
||||
resulting in better and more accurate reporting when they fail
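
As a rough sketch of the parametrization and marker features above (the
``slow`` marker name and the argument values are invented for the example)::

    import pytest

    @pytest.mark.slow                  # select only these tests with: py.test -m slow
    @pytest.mark.parametrize(("value", "expected"), [(1, 2), (2, 3)])
    def test_increment(value, expected):
        assert value + 1 == expected

Running ``py.test --durations=3`` afterwards additionally lists the three
slowest test, setup and teardown calls.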
|
||||
|
||||
Besides there is the usual set of bug fixes along with a cleanup of
|
||||
pytest's own test suite allowing it to run on a wider range of environments.
|
||||
|
||||
For general information, see extensive docs with examples here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
If you want to install or upgrade pytest you might just type::
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri, Alfredo Deza and all who gave feedback or sent bug reports.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
|
||||
notes on incompatibility
|
||||
------------------------------
|
||||
|
||||
While test suites should work unchanged you might need to upgrade plugins:
|
||||
|
||||
* You need a new version of the pytest-xdist plugin (1.7) for distributing
|
||||
test runs.
|
||||
|
||||
* Other plugins might need an upgrade if they implement
|
||||
the ``pytest_runtest_logreport`` hook which now is called unconditionally
|
||||
for the setup/teardown fixture phases of a test. You may choose to
|
||||
ignore setup/teardown failures by inserting "if rep.when != 'call': return"
|
||||
or something similar. Note that most code probably "just" works because
|
||||
the hook was already called for failing setup/teardown phases of a test
|
||||
so a plugin should have been ready to grok such reports already.
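
A minimal sketch of such a guard in a conftest.py or plugin module (the
``collected`` list merely stands in for whatever call-phase handling the
plugin already does)::

    collected = []

    def pytest_runtest_logreport(report):
        if report.when != "call":
            return                       # ignore setup/teardown phase reports
        collected.append(report.nodeid)  # pre-2.2 style handling continues here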
|
||||
|
||||
|
||||
Changes between 2.1.3 and 2.2.0
|
||||
----------------------------------------
|
||||
|
||||
- fix issue90: introduce eager tearing down of test items so that
|
||||
teardown function are called earlier.
|
||||
- add an all-powerful metafunc.parametrize function which allows to
|
||||
parametrize test function arguments in multiple steps and therefore
|
||||
from independent plugins and places.
|
||||
- add a @pytest.mark.parametrize helper which allows to easily
|
||||
call a test function with different argument values.
|
||||
- Add examples to the "parametrize" example page, including a quick port
|
||||
of Test scenarios and the new parametrize function and decorator.
|
||||
- introduce registration for "pytest.mark.*" helpers via ini-files
|
||||
or through plugin hooks. Also introduce a "--strict" option which
|
||||
will treat unregistered markers as errors
|
||||
allowing to avoid typos and maintain a well described set of markers
|
||||
for your test suite. See examples at http://pytest.org/latest/mark.html
|
||||
and its links.
|
||||
- issue50: introduce "-m marker" option to select tests based on markers
|
||||
(this is a stricter and more predictable version of "-k" in that "-m"
|
||||
only matches complete markers and has more obvious rules for and/or
|
||||
semantics).
|
||||
- new feature to help optimizing the speed of your tests:
|
||||
--durations=N option for displaying N slowest test calls
|
||||
and setup/teardown methods.
|
||||
- fix issue87: --pastebin now works with python3
|
||||
- fix issue89: --pdb with unexpected exceptions in doctest work more sensibly
|
||||
- fix and cleanup pytest's own test suite to not leak FDs
|
||||
- fix issue83: link to generated funcarg list
|
||||
- fix issue74: pyarg module names are now checked against imp.find_module false positives
|
||||
- fix compatibility with twisted/trial-11.1.0 use cases
|
||||
doc/en/announce/release-2.2.1.txt: new file, 41 lines
@@ -0,0 +1,41 @@
|
||||
pytest-2.2.1: bug fixes, perfect teardowns
|
||||
===========================================================================
|
||||
|
||||
|
||||
pytest-2.2.1 is a minor backward-compatible release of the py.test
|
||||
testing tool. It contains bug fixes and little improvements, including
|
||||
documentation fixes. If you are using the distributed testing
|
||||
plugin, make sure to upgrade it to pytest-xdist-1.8.
|
||||
|
||||
For general information see here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
To install or upgrade pytest:
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Special thanks for helping on this release to Ronny Pfannschmidt, Jurko
|
||||
Gospodnetic and Ralf Schmitt.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
|
||||
Changes between 2.2.0 and 2.2.1
|
||||
----------------------------------------
|
||||
|
||||
- fix issue99 (in pytest and py) internal errors with resultlog now
|
||||
produce better output - fixed by normalizing pytest_internalerror
|
||||
input arguments.
|
||||
- fix issue97 / traceback issues (in pytest and py) improve traceback output
|
||||
in conjunction with jinja2 and cython which hack tracebacks
|
||||
- fix issue93 (in pytest and pytest-xdist) avoid "delayed teardowns":
|
||||
the final test in a test node will now run its teardown directly
|
||||
instead of waiting for the end of the session. Thanks Dave Hunt for
|
||||
the good reporting and feedback. The pytest_runtest_protocol as well
|
||||
as the pytest_runtest_teardown hooks now have "nextitem" available
|
||||
which will be None indicating the end of the test run (a short sketch follows this list).
|
||||
- fix collection crash due to unknown-source collected items, thanks
|
||||
to Ralf Schmitt (fixed by depending on a more recent pylib)
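
A small, purely illustrative conftest.py hook showing how the new argument
can be used::

    # conftest.py
    def pytest_runtest_teardown(item, nextitem):
        if nextitem is None:
            # this was the last test to be torn down on this node
            print("finished teardown after %s" % item.nodeid)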
|
||||
doc/en/announce/release-2.2.2.txt: new file, 43 lines
@@ -0,0 +1,43 @@
|
||||
pytest-2.2.2: bug fixes
|
||||
===========================================================================
|
||||
|
||||
pytest-2.2.2 (updated to 2.2.3 to fix packaging issues) is a minor
|
||||
backward-compatible release of the versatile py.test testing tool. It
|
||||
contains bug fixes and a few refinements particularly to reporting with
"--collectonly", see below for details.
|
||||
|
||||
For general information see here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
To install or upgrade pytest:
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Special thanks for helping on this release to Ronny Pfannschmidt
|
||||
and Ralf Schmitt and the contributors of issues.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
|
||||
Changes between 2.2.1 and 2.2.2
|
||||
----------------------------------------
|
||||
|
||||
- fix issue101: wrong args to unittest.TestCase test function now
|
||||
produce better output
|
||||
- fix issue102: report more useful errors and hints for when a
|
||||
test directory was renamed and some pyc/__pycache__ remain
|
||||
- fix issue106: allow parametrize to be applied multiple times
|
||||
e.g. from module, class and at function level.
|
||||
- fix issue107: actually perform session scope finalization
|
||||
- don't check in parametrize if indirect parameters are funcarg names
|
||||
- add chdir method to monkeypatch funcarg (a small usage sketch follows this list)
|
||||
- fix crash resulting from calling monkeypatch undo a second time
|
||||
- fix issue115: make --collectonly robust against early failure
|
||||
(missing files/directories)
|
||||
- "-qq --collectonly" now shows only files and the number of tests in them
|
||||
- "-q --collectonly" now shows test ids
|
||||
- allow adding of attributes to test reports such that it also works
|
||||
with distributed testing (no upgrade of pytest-xdist needed)
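
A minimal usage sketch of the new monkeypatch.chdir method mentioned above::

    import os

    def test_runs_in_tmpdir(monkeypatch, tmpdir):
        monkeypatch.chdir(tmpdir)  # previous working directory is restored afterwards
        assert os.path.basename(os.getcwd()) == tmpdir.basename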
|
||||
doc/en/announce/release-2.2.4.txt: new file, 39 lines
@@ -0,0 +1,39 @@
|
||||
pytest-2.2.4: bug fixes, better junitxml/unittest/python3 compat
|
||||
===========================================================================
|
||||
|
||||
pytest-2.2.4 is a minor backward-compatible release of the versatile
|
||||
py.test testing tool. It contains bug fixes and a few refinements
|
||||
to junitxml reporting, better unittest- and python3 compatibility.
|
||||
|
||||
For general information see here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
To install or upgrade pytest:
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Special thanks for helping on this release to Ronny Pfannschmidt
|
||||
and Benjamin Peterson and the contributors of issues.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
Changes between 2.2.3 and 2.2.4
|
||||
-----------------------------------
|
||||
|
||||
- fix error message for rewritten assertions involving the % operator
|
||||
- fix issue 126: correctly match all invalid xml characters for junitxml
|
||||
binary escape
|
||||
- fix issue with unittest: now @unittest.expectedFailure markers should
|
||||
be processed correctly (you can also use @pytest.mark markers; a small example follows this list)
|
||||
- document integration with the extended distribute/setuptools test commands
|
||||
- fix issue 140: properly get the real functions
|
||||
of bound classmethods for setup/teardown_class
|
||||
- fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
|
||||
- fix issue #143: call unconfigure/sessionfinish always when
|
||||
configure/sessionstart where called
|
||||
- fix issue #144: better mangle test ids to junitxml classnames
|
||||
- upgrade distribute_setup.py to 0.6.27
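
For the unittest item above, a tiny example of a test that pytest now reports
as an expected failure (requires a unittest version providing
expectedFailure, e.g. Python 2.7 or 3.x)::

    import unittest

    class TestFuture(unittest.TestCase):
        @unittest.expectedFailure
        def test_not_implemented_yet(self):
            self.assertEqual(1, 2)  # reported as xfail instead of a failure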
|
||||
|
||||
doc/en/announce/release-2.3.0.txt: new file, 134 lines
@@ -0,0 +1,134 @@
|
||||
pytest-2.3: improved fixtures / better unittest integration
|
||||
=============================================================================
|
||||
|
||||
pytest-2.3 comes with many major improvements for fixture/funcarg management
|
||||
and parametrized testing in Python. It is now easier, more efficient and
|
||||
more predictable to re-run the same tests with different fixture
|
||||
instances. Also, you can directly declare the caching "scope" of
|
||||
fixtures so that dependent tests throughout your whole test suite can
|
||||
re-use database or other expensive fixture objects with ease. Lastly,
|
||||
it's possible for fixture functions (formerly known as funcarg
|
||||
factories) to use other fixtures, allowing for a completely modular and
|
||||
re-useable fixture design.
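
In rough outline the new style reads like this (class and fixture names are
invented for the sketch)::

    import pytest

    class FakeDatabase(object):
        def begin(self):
            return "transaction"

    @pytest.fixture(scope="session")
    def db():
        # stands in for an expensive resource created once per test session
        return FakeDatabase()

    @pytest.fixture
    def transaction(db):
        # a fixture function may itself use other fixtures
        return db.begin()

    def test_insert(transaction):
        assert transaction == "transaction"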
|
||||
|
||||
For detailed info and tutorial-style examples, see:
|
||||
|
||||
http://pytest.org/latest/fixture.html
|
||||
|
||||
Moreover, there is now support for using pytest fixtures/funcargs with
|
||||
unittest-style suites, see here for examples:
|
||||
|
||||
http://pytest.org/latest/unittest.html
|
||||
|
||||
Besides, more unittest-test suites are now expected to "simply work"
|
||||
with pytest.
|
||||
|
||||
All changes are backward compatible and you should be able to continue
|
||||
to run your test suites and 3rd party plugins that worked with
|
||||
pytest-2.2.4.
|
||||
|
||||
If you are interested in the precise reasoning (including examples) of the
|
||||
pytest-2.3 fixture evolution, please consult
|
||||
http://pytest.org/latest/funcarg_compare.html
|
||||
|
||||
For general info on installation and getting started:
|
||||
|
||||
http://pytest.org/latest/getting-started.html
|
||||
|
||||
Docs and PDF access as usual at:
|
||||
|
||||
http://pytest.org
|
||||
|
||||
and more details for those already in the knowing of pytest can be found
|
||||
in the CHANGELOG below.
|
||||
|
||||
Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko
|
||||
Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping
|
||||
to get the new features right and well integrated. Ronny and Floris
|
||||
also helped to fix a number of bugs and yet more people helped by
|
||||
providing bug reports.
|
||||
|
||||
have fun,
|
||||
holger krekel
|
||||
|
||||
|
||||
Changes between 2.2.4 and 2.3.0
|
||||
-----------------------------------
|
||||
|
||||
- fix issue202 - better automatic names for parametrized test functions
|
||||
- fix issue139 - introduce @pytest.fixture which allows direct scoping
|
||||
and parametrization of funcarg factories. Introduce new @pytest.setup
|
||||
marker to allow the writing of setup functions which accept funcargs.
|
||||
- fix issue198 - conftest fixtures were not found on windows32 in some
|
||||
circumstances with nested directory structures due to path manipulation issues
|
||||
- fix issue193 skip test functions which were parametrized with empty
|
||||
parameter sets
|
||||
- fix python3.3 compat, mostly reporting bits that previously depended
|
||||
on dict ordering
|
||||
- introduce re-ordering of tests by resource and parametrization setup
|
||||
which takes precedence to the usual file-ordering
|
||||
- fix issue185 monkeypatching time.time does not cause pytest to fail
|
||||
- fix issue172 duplicate call of pytest.setup-decorated setup_module
|
||||
functions
|
||||
- fix junitxml=path construction so that if tests change the
|
||||
current working directory and the path is a relative path
|
||||
it is constructed correctly from the original current working dir.
|
||||
- fix "python setup.py test" example to cause a proper "errno" return
|
||||
- fix issue165 - fix broken doc links and mention stackoverflow for FAQ
|
||||
- catch unicode-issues when writing failure representations
|
||||
to terminal to prevent the whole session from crashing
|
||||
- fix xfail/skip confusion: a skip-mark or an imperative pytest.skip
|
||||
will now take precedence before xfail-markers because we
|
||||
can't determine xfail/xpass status in case of a skip. see also:
|
||||
http://stackoverflow.com/questions/11105828/in-py-test-when-i-explicitly-skip-a-test-that-is-marked-as-xfail-how-can-i-get
|
||||
|
||||
- always report installed 3rd party plugins in the header of a test run
|
||||
|
||||
- fix issue160: a failing setup of an xfail-marked tests should
|
||||
be reported as xfail (not xpass)
|
||||
|
||||
- fix issue128: show captured output when capsys/capfd are used
|
||||
|
||||
- fix issue179: properly show the dependency chain of factories
|
||||
|
||||
- pluginmanager.register(...) now raises ValueError if the
|
||||
plugin has been already registered or the name is taken
|
||||
|
||||
- fix issue159: improve http://pytest.org/latest/faq.html
|
||||
especially with respect to the "magic" history, also mention
|
||||
pytest-django, trial and unittest integration.
|
||||
|
||||
- make request.keywords and node.keywords writable. All descendant
|
||||
collection nodes will see keyword values. Keywords are dictionaries
|
||||
containing markers and other info.
|
||||
|
||||
- fix issue 178: xml binary escapes are now wrapped in py.xml.raw
|
||||
|
||||
- fix issue 176: correctly catch the builtin AssertionError
|
||||
even when we replaced AssertionError with a subclass on the
|
||||
python level
|
||||
|
||||
- factory discovery no longer fails with magic global callables
|
||||
that provide no sane __code__ object (mock.call for example)
|
||||
|
||||
- fix issue 182: testdir.inprocess_run now considers passed plugins
|
||||
|
||||
- fix issue 188: ensure sys.exc_info is clear on python2
|
||||
before calling into a test
|
||||
|
||||
- fix issue 191: add unittest TestCase runTest method support
|
||||
- fix issue 156: monkeypatch correctly handles class level descriptors
|
||||
|
||||
- reporting refinements:
|
||||
|
||||
- pytest_report_header now receives a "startdir" so that
|
||||
you can use startdir.bestrelpath(yourpath) to show
|
||||
a nice relative path
|
||||
|
||||
- allow plugins to implement both pytest_report_header and
|
||||
pytest_sessionstart (sessionstart is invoked first).
|
||||
|
||||
- don't show deselected reason line if there is none
|
||||
|
||||
- py.test -vv will show all assert comparisons instead of truncating
|
||||
|
||||
@@ -10,14 +10,15 @@ py.test reference documentation
|
||||
builtin.txt
|
||||
customize.txt
|
||||
assert.txt
|
||||
funcargs.txt
|
||||
fixture.txt
|
||||
parametrize.txt
|
||||
xunit_setup.txt
|
||||
capture.txt
|
||||
monkeypatch.txt
|
||||
xdist.txt
|
||||
tmpdir.txt
|
||||
skipping.txt
|
||||
mark.txt
|
||||
skipping.txt
|
||||
recwarn.txt
|
||||
unittest.txt
|
||||
nose.txt
|
||||
@@ -2,7 +2,10 @@
|
||||
The writing and reporting of assertions in tests
|
||||
==================================================
|
||||
|
||||
.. _`assertfeedback`:
|
||||
.. _`assert with the assert statement`:
|
||||
.. _`assert`:
|
||||
|
||||
|
||||
Asserting with the ``assert`` statement
|
||||
---------------------------------------------------------
|
||||
@@ -23,8 +26,8 @@ you will see the return value of the function call::
|
||||
|
||||
$ py.test test_assert1.py
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 1 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 1 items
|
||||
|
||||
test_assert1.py F
|
||||
|
||||
@@ -37,7 +40,7 @@ you will see the return value of the function call::
|
||||
E + where 3 = f()
|
||||
|
||||
test_assert1.py:5: AssertionError
|
||||
========================= 1 failed in 0.03 seconds =========================
|
||||
========================= 1 failed in 0.01 seconds =========================
|
||||
|
||||
py.test has support for showing the values of the most common subexpressions
|
||||
including calls, attributes, comparisons, and binary and unary
|
||||
@@ -54,6 +57,8 @@ will be simply shown in the traceback.
|
||||
|
||||
See :ref:`assert-details` for more information on assertion introspection.
|
||||
|
||||
.. _`assertraises`:
|
||||
|
||||
Assertions about expected exceptions
|
||||
------------------------------------------
|
||||
|
||||
@@ -73,7 +78,7 @@ and if you need to have access to the actual exception info you may use::
|
||||
|
||||
# do checks related to excinfo.type, excinfo.value, excinfo.traceback
|
||||
|
||||
If you want to write test code that works on Python2.4 as well,
|
||||
If you want to write test code that works on Python 2.4 as well,
|
||||
you may also use two other ways to test for an expected exception::
|
||||
|
||||
pytest.raises(ExpectedException, func, *args, **kwargs)
|
||||
@@ -105,8 +110,8 @@ if you run this module::
|
||||
|
||||
$ py.test test_assert2.py
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 1 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 1 items
|
||||
|
||||
test_assert2.py F
|
||||
|
||||
@@ -124,7 +129,7 @@ if you run this module::
|
||||
E '5'
|
||||
|
||||
test_assert2.py:5: AssertionError
|
||||
========================= 1 failed in 0.02 seconds =========================
|
||||
========================= 1 failed in 0.01 seconds =========================
|
||||
|
||||
Special comparisons are done for a number of cases:
|
||||
|
||||
@@ -168,7 +173,6 @@ you can run the test module and get the custom output defined in
|
||||
the conftest file::
|
||||
|
||||
$ py.test -q test_foocompare.py
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
================================= FAILURES =================================
|
||||
_______________________________ test_compare _______________________________
|
||||
@@ -181,7 +185,6 @@ the conftest file::
|
||||
E vals: 1 != 2
|
||||
|
||||
test_foocompare.py:8: AssertionError
|
||||
1 failed in 0.02 seconds
|
||||
|
||||
.. _assert-details:
|
||||
.. _`assert introspection`:
|
||||
186
doc/en/attic_fixtures.txt
Normal file
@@ -0,0 +1,186 @@
|
||||
|
||||
**Test classes, modules or whole projects can make use of
|
||||
one or more fixtures**. All required fixture functions will execute
|
||||
before a test from the specifying context executes. You can use this
|
||||
to make tests operate from a pre-initialized directory or with
|
||||
certain environment variables or with pre-configured global application
|
||||
settings.
|
||||
|
||||
For example, the Django_ project requires database
|
||||
initialization to be able to import from and use its model objects.
|
||||
For that, the `pytest-django`_ plugin provides fixtures which your
|
||||
project can then easily depend on or extend, simply by referencing the
|
||||
name of the particular fixture.
|
||||
|
||||
Fixture functions have limited visibility which depends on where they
|
||||
are defined. If they are defined on a test class, only its test methods
|
||||
may use it. A fixture defined in a module can only be used
|
||||
from that test module. A fixture defined in a conftest.py file
|
||||
can only be used by the tests below the directory of that file.
|
||||
Lastly, plugins can define fixtures which are available across all
|
||||
projects.
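
A minimal sketch of these visibility rules (the file and fixture names here
are invented, for illustration only)::

    # content of conftest.py -- fixtures defined here are visible to
    # all test files in this directory and below
    import pytest

    @pytest.fixture
    def db_url():
        return "sqlite:///:memory:"   # made-up configuration value

    # content of test_visibility.py -- uses the fixture by naming it
    def test_uses_conftest_fixture(db_url):
        assert db_url.startswith("sqlite")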
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
Python, Java and many other languages support a so-called xUnit_ style
|
||||
for providing a fixed state, `test fixtures`_, for running tests. It
|
||||
typically involves calling a setup function before and a teardown
|
||||
function after a test executes. In 2005 pytest introduced a scope-specific
|
||||
model of automatically detecting and calling setup and teardown
|
||||
functions on a per-module, class or function basis. The Python unittest
|
||||
package and nose have subsequently incorporated them. This model
|
||||
remains supported by pytest as :ref:`classic xunit`.
|
||||
|
||||
One property of xunit fixture functions is that they work implicitly
|
||||
by preparing global state or setting attributes on TestCase objects.
|
||||
By contrast, pytest provides :ref:`funcargs` which allow you to
|
||||
dependency-inject application test state into test functions or
|
||||
methods as function arguments. If your application is sufficiently modular
|
||||
or if you are creating a new project, we recommend that you head over to
|
||||
:ref:`funcargs` instead because many pytest users agree that using this
|
||||
paradigm leads to better application and test organisation.
|
||||
|
||||
However, not all programs and frameworks work and can be tested in
|
||||
a fully modular way. Rather, they require preparation of global state
|
||||
like database setup on which further fixtures like preparing application
|
||||
specific tables or wrapping tests in transactions can take place. For those
|
||||
needs, pytest-2.3 now supports new **fixture functions** which come with
|
||||
a ton of improvements over classic xunit fixture writing. Fixture functions:
|
||||
|
||||
- allow separating different setup concerns into multiple modular functions
|
||||
|
||||
- can receive and fully interoperate with :ref:`funcargs <resources>`,
|
||||
|
||||
- are called multiple times if their funcargs are parametrized,
|
||||
|
||||
- don't need to be defined directly in your test classes or modules,
|
||||
they can also be defined in a plugin or in :ref:`conftest.py <conftest.py>` files and get called
|
||||
|
||||
- are called on a per-session, per-module, per-class or per-function basis
|
||||
by means of a simple "scope" declaration.
|
||||
|
||||
- can access the :ref:`request <request>` object which allows you to
|
||||
introspect and interact with the (scoped) testcontext.
|
||||
|
||||
- can add cleanup functions which will be invoked when the last test
|
||||
of the fixture test context has finished executing.
|
||||
|
||||
All of these features are now demonstrated by little examples.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
test modules accessing a global resource
|
||||
-------------------------------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
Relying on `global state is considered bad programming practice <http://en.wikipedia.org/wiki/Global_variable>`_ but when you work with an application
|
||||
that relies on it you often have no choice.
|
||||
|
||||
If you want test modules to access a global resource,
|
||||
you can attach the resource to the module globals in
|
||||
a per-module autouse function. We use a :ref:`resource factory
|
||||
<@pytest.fixture>` to create our global resource::
|
||||
|
||||
# content of conftest.py
|
||||
import pytest
|
||||
|
||||
class GlobalResource:
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def globresource():
|
||||
return GlobalResource()
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def setresource(request, globresource):
|
||||
request.module.globresource = globresource
|
||||
|
||||
Now any test module can access ``globresource`` as a module global::
|
||||
|
||||
# content of test_glob.py
|
||||
|
||||
def test_1():
|
||||
print ("test_1 %s" % globresource)
|
||||
def test_2():
|
||||
print ("test_2 %s" % globresource)
|
||||
|
||||
Let's run this module without output-capturing::
|
||||
|
||||
$ py.test -qs test_glob.py
|
||||
FF
|
||||
================================= FAILURES =================================
|
||||
__________________________________ test_1 __________________________________
|
||||
|
||||
def test_1():
|
||||
> print ("test_1 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:3: NameError
|
||||
__________________________________ test_2 __________________________________
|
||||
|
||||
def test_2():
|
||||
> print ("test_2 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:5: NameError
|
||||
|
||||
The two tests see the same global ``globresource`` object.
|
||||
|
||||
Parametrizing the global resource
|
||||
+++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
We extend the previous example and add parametrization to the globresource
|
||||
factory and also add a finalizer::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
import pytest
|
||||
|
||||
class GlobalResource:
|
||||
def __init__(self, param):
|
||||
self.param = param
|
||||
|
||||
@pytest.fixture(scope="session", params=[1,2])
|
||||
def globresource(request):
|
||||
g = GlobalResource(request.param)
|
||||
def fin():
|
||||
print "finalizing", g
|
||||
request.addfinalizer(fin)
|
||||
return g
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def setresource(request, globresource):
|
||||
request.module.globresource = globresource
|
||||
|
||||
And then re-run our test module::
|
||||
|
||||
$ py.test -qs test_glob.py
|
||||
FF
|
||||
================================= FAILURES =================================
|
||||
__________________________________ test_1 __________________________________
|
||||
|
||||
def test_1():
|
||||
> print ("test_1 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:3: NameError
|
||||
__________________________________ test_2 __________________________________
|
||||
|
||||
def test_2():
|
||||
> print ("test_2 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:5: NameError
|
||||
|
||||
We are now running the two tests twice with two different global resource
|
||||
instances. Note that the tests are ordered such that only
|
||||
one instance is active at any given time: the finalizer of
|
||||
the first globresource instance is called before the second
|
||||
instance is created and sent to the setup functions.
|
||||
|
||||
@@ -1,34 +1,78 @@
|
||||
|
||||
.. _`pytest helpers`:
|
||||
|
||||
Pytest builtin helpers
|
||||
Pytest API and builtin fixtures
|
||||
================================================
|
||||
|
||||
builtin pytest.* functions and helping objects
|
||||
-----------------------------------------------------
|
||||
This is a list of ``pytest.*`` API functions and fixtures.
|
||||
|
||||
You can always use an interactive Python prompt and type::
|
||||
For information on plugin hooks and objects, see :ref:`plugins`.
|
||||
|
||||
For information on the ``pytest.mark`` mechanism, see :ref:`mark`.
|
||||
|
||||
For the objects below, you can also ask for help interactively, e.g. by
|
||||
typing on the Python interactive prompt something like::
|
||||
|
||||
import pytest
|
||||
help(pytest)
|
||||
|
||||
to get an overview on the globally available helpers.
|
||||
.. currentmodule:: pytest
|
||||
|
||||
.. automodule:: pytest
|
||||
:members:
|
||||
Invoking pytest interactively
|
||||
---------------------------------------------------
|
||||
|
||||
Builtin function arguments
|
||||
.. autofunction:: main
|
||||
|
||||
More examples at :ref:`pytest.main-usage`
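
As a quick, hedged illustration (the test file name below is made up),
``pytest.main`` accepts a list of command line arguments and returns the
exit code of the run::

    import pytest

    # roughly equivalent to running "py.test -q test_sample.py" from a shell;
    # the return value is the exit code of the test run (0 means all passed)
    exitcode = pytest.main(["-q", "test_sample.py"])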
|
||||
|
||||
|
||||
Helpers for assertions about Exceptions/Warnings
|
||||
--------------------------------------------------------
|
||||
|
||||
.. autofunction:: raises
|
||||
|
||||
Examples at :ref:`assertraises`.
|
||||
|
||||
.. autofunction:: deprecated_call
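
A minimal sketch of both helpers (nothing here beyond what the docstrings
above describe; the warning-emitting function is invented)::

    import warnings
    import pytest

    def test_zero_division():
        # the context-manager form gives access to the exception info
        with pytest.raises(ZeroDivisionError) as excinfo:
            1 / 0
        assert "division" in str(excinfo.value)

    def _oldfunc():
        warnings.warn("use newfunc instead", DeprecationWarning)

    def test_deprecation_is_reported():
        # fails if the call does not issue a DeprecationWarning
        pytest.deprecated_call(_oldfunc)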
|
||||
|
||||
Raising a specific test outcome
|
||||
--------------------------------------
|
||||
|
||||
You can use the following functions in your test, fixture or setup
|
||||
functions to force a certain test outcome. Note that most often
|
||||
you can rather use declarative marks, see :ref:`skipping`.
|
||||
|
||||
.. autofunction:: _pytest.runner.fail
|
||||
.. autofunction:: _pytest.runner.skip
|
||||
.. autofunction:: _pytest.runner.importorskip
|
||||
.. autofunction:: _pytest.skipping.xfail
|
||||
.. autofunction:: _pytest.runner.exit
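
For orientation, a small sketch of how these outcome functions are typically
called (the module name and messages are made up)::

    import pytest

    def test_needs_optional_dependency():
        # skips the test unless the named module can be imported
        lxml = pytest.importorskip("lxml")
        assert lxml is not None

    def test_not_configured_here():
        pytest.skip("imperative skip: feature not configured on this box")

    def test_known_problem():
        pytest.xfail("known to fail on this backend")

    def test_explicit_failure():
        pytest.fail("failing with an explicit message")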
|
||||
|
||||
fixtures and requests
|
||||
-----------------------------------------------------
|
||||
|
||||
You can ask for available builtin or project-custom
|
||||
:ref:`function arguments <funcargs>` by typing::
|
||||
To mark a fixture function:
|
||||
|
||||
$ py.test --funcargs
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collected 0 items
|
||||
pytestconfig
|
||||
the pytest config object with access to command line opts.
|
||||
.. autofunction:: _pytest.python.fixture
|
||||
|
||||
Tutorial at :ref:`fixtures`.
|
||||
|
||||
The ``request`` object that can be used from fixture functions.
|
||||
|
||||
.. autoclass:: _pytest.python.FixtureRequest()
|
||||
:members:
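
To give a feel for how the ``request`` object is used, here is a small
sketch of a fixture function (the resource dictionary is invented; the
attribute and method names are the ones documented above)::

    import pytest

    @pytest.fixture(scope="module")
    def resource(request):
        res = {"owner": request.module.__name__}
        # finalizers registered on the request run when the module scope ends
        request.addfinalizer(res.clear)
        return res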
|
||||
|
||||
|
||||
.. _builtinfixtures:
|
||||
.. _builtinfuncargs:
|
||||
|
||||
Builtin fixtures/function arguments
|
||||
-----------------------------------------
|
||||
|
||||
You can ask for available builtin or project-custom
|
||||
:ref:`fixtures <fixtures>` by typing::
|
||||
|
||||
$ py.test -q --fixtures
|
||||
capsys
|
||||
enables capturing of writes to sys.stdout/sys.stderr and makes
|
||||
captured output available via ``capsys.readouterr()`` method calls
|
||||
@@ -39,13 +83,6 @@ You can ask for available builtin or project-custom
|
||||
captured output available via ``capsys.readouterr()`` method calls
|
||||
which return a ``(out, err)`` tuple.
|
||||
|
||||
tmpdir
|
||||
return a temporary directory path object
|
||||
which is unique to each test function invocation,
|
||||
created as a sub directory of the base temporary
|
||||
directory. The returned object is a `py.path.local`_
|
||||
path object.
|
||||
|
||||
monkeypatch
|
||||
The returned ``monkeypatch`` funcarg provides these
|
||||
helper methods to modify objects, dictionaries or os.environ::
|
||||
@@ -57,12 +94,15 @@ You can ask for available builtin or project-custom
|
||||
monkeypatch.setenv(name, value, prepend=False)
|
||||
monkeypatch.delenv(name, value, raising=True)
|
||||
monkeypatch.syspath_prepend(path)
|
||||
monkeypatch.chdir(path)
|
||||
|
||||
All modifications will be undone after the requesting
|
||||
test function has finished. The ``raising``
|
||||
parameter determines if a KeyError or AttributeError
|
||||
will be raised if the set/deletion operation has no target.
|
||||
|
||||
pytestconfig
|
||||
the pytest config object with access to command line opts.
|
||||
recwarn
|
||||
Return a WarningsRecorder instance that provides these methods:
|
||||
|
||||
@@ -72,7 +112,11 @@ You can ask for available builtin or project-custom
|
||||
See http://docs.python.org/library/warnings.html for information
|
||||
on warning categories.
|
||||
|
||||
cov
|
||||
A pytest funcarg that provides access to the underlying coverage object.
|
||||
tmpdir
|
||||
return a temporary directory path object
|
||||
which is unique to each test function invocation,
|
||||
created as a sub directory of the base temporary
|
||||
directory. The returned object is a `py.path.local`_
|
||||
path object.
|
||||
|
||||
|
||||
============================= in 0.00 seconds =============================
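
As a brief, hedged usage sketch for two of the fixtures listed above (the
patched environment variable and attribute are made up)::

    import os

    def test_patched_environment(monkeypatch):
        # modifications are undone automatically after the test finishes
        monkeypatch.setenv("MY_APP_MODE", "testing")
        monkeypatch.setattr(os, "getcwd", lambda: "/tmp/fake-cwd")
        assert os.environ["MY_APP_MODE"] == "testing"
        assert os.getcwd() == "/tmp/fake-cwd"

    def test_captured_output(capsys):
        print ("hello")
        out, err = capsys.readouterr()
        assert out == "hello\n"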
|
||||
@@ -64,8 +64,8 @@ of the failing function and hide the other one::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 2 items
|
||||
|
||||
test_module.py .F
|
||||
|
||||
@@ -78,8 +78,8 @@ of the failing function and hide the other one::
|
||||
|
||||
test_module.py:9: AssertionError
|
||||
----------------------------- Captured stdout ------------------------------
|
||||
setting up <function test_func2 at 0x10130ccf8>
|
||||
==================== 1 failed, 1 passed in 0.03 seconds ====================
|
||||
setting up <function test_func2 at 0x2d98050>
|
||||
==================== 1 failed, 1 passed in 0.01 seconds ====================
|
||||
|
||||
Accessing captured output from a test function
|
||||
---------------------------------------------------
|
||||
@@ -4,4 +4,4 @@
|
||||
Changelog history
|
||||
=================================
|
||||
|
||||
.. include:: ../CHANGELOG
|
||||
.. include:: ../../CHANGELOG
|
||||
287
doc/en/conf.py
Normal file
@@ -0,0 +1,287 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
#
|
||||
# pytest documentation build configuration file, created by
|
||||
# sphinx-quickstart on Fri Oct 8 17:54:28 2010.
|
||||
#
|
||||
# This file is execfile()d with the current directory set to its containing dir.
|
||||
#
|
||||
# Note that not all possible configuration values are present in this
|
||||
# autogenerated file.
|
||||
#
|
||||
# All configuration values have a default; values that are commented out
|
||||
# serve to show the default.
|
||||
|
||||
# The version info for the project you're documenting, acts as replacement for
|
||||
# |version| and |release|, also used in various other places throughout the
|
||||
# built documents.
|
||||
#
|
||||
# The full version, including alpha/beta/rc tags.
|
||||
# The short X.Y version.
|
||||
version = release = "2.3.0"
|
||||
|
||||
import sys, os
|
||||
|
||||
# If extensions (or modules to document with autodoc) are in another directory,
|
||||
# add these directories to sys.path here. If the directory is relative to the
|
||||
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
||||
#sys.path.insert(0, os.path.abspath('.'))
|
||||
|
||||
autodoc_member_order = "bysource"
|
||||
todo_include_todos = 1
|
||||
|
||||
# -- General configuration -----------------------------------------------------
|
||||
|
||||
# If your documentation needs a minimal Sphinx version, state it here.
|
||||
#needs_sphinx = '1.0'
|
||||
|
||||
# Add any Sphinx extension module names here, as strings. They can be extensions
|
||||
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
|
||||
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.autosummary',
|
||||
'sphinx.ext.intersphinx', 'sphinx.ext.viewcode']
|
||||
|
||||
# Add any paths that contain templates here, relative to this directory.
|
||||
templates_path = ['_templates']
|
||||
|
||||
# The suffix of source filenames.
|
||||
source_suffix = '.txt'
|
||||
|
||||
# The encoding of source files.
|
||||
#source_encoding = 'utf-8-sig'
|
||||
|
||||
# The master toctree document.
|
||||
master_doc = 'contents'
|
||||
|
||||
# General information about the project.
|
||||
project = u'pytest'
|
||||
copyright = u'2011, holger krekel et alii'
|
||||
|
||||
|
||||
|
||||
# The language for content autogenerated by Sphinx. Refer to documentation
|
||||
# for a list of supported languages.
|
||||
#language = None
|
||||
|
||||
# There are two options for replacing |today|: either, you set today to some
|
||||
# non-false value, then it is used:
|
||||
#today = ''
|
||||
# Else, today_fmt is used as the format for a strftime call.
|
||||
#today_fmt = '%B %d, %Y'
|
||||
|
||||
# List of patterns, relative to source directory, that match files and
|
||||
# directories to ignore when looking for source files.
|
||||
exclude_patterns = ['links.inc', '_build', 'naming20.txt', 'test/*',
|
||||
"old_*",
|
||||
'*attic*',
|
||||
'*/attic*',
|
||||
'funcargs.txt',
|
||||
'setup.txt',
|
||||
'example/remoteinterp.txt',
|
||||
]
|
||||
|
||||
|
||||
# The reST default role (used for this markup: `text`) to use for all documents.
|
||||
#default_role = None
|
||||
|
||||
# If true, '()' will be appended to :func: etc. cross-reference text.
|
||||
#add_function_parentheses = True
|
||||
|
||||
# If true, the current module name will be prepended to all description
|
||||
# unit titles (such as .. function::).
|
||||
add_module_names = False
|
||||
|
||||
# If true, sectionauthor and moduleauthor directives will be shown in the
|
||||
# output. They are ignored by default.
|
||||
#show_authors = False
|
||||
|
||||
# The name of the Pygments (syntax highlighting) style to use.
|
||||
pygments_style = 'sphinx'
|
||||
|
||||
|
||||
|
||||
# A list of ignored prefixes for module index sorting.
|
||||
#modindex_common_prefix = []
|
||||
|
||||
|
||||
# -- Options for HTML output ---------------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages. See the documentation for
|
||||
# a list of builtin themes.
|
||||
html_theme = 'sphinxdoc'
|
||||
|
||||
# Theme options are theme-specific and customize the look and feel of a theme
|
||||
# further. For a list of options available for each theme, see the
|
||||
# documentation.
|
||||
html_theme_options = {}
|
||||
|
||||
# Add any paths that contain custom themes here, relative to this directory.
|
||||
#html_theme_path = []
|
||||
|
||||
# The name for this set of Sphinx documents. If None, it defaults to
|
||||
# "<project> v<release> documentation".
|
||||
html_title = None
|
||||
|
||||
# A shorter title for the navigation bar. Default is the same as html_title.
|
||||
html_short_title = "pytest-%s" % release
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top
|
||||
# of the sidebar.
|
||||
#html_logo = None
|
||||
|
||||
# The name of an image file (within the static path) to use as favicon of the
|
||||
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
|
||||
# pixels large.
|
||||
#html_favicon = None
|
||||
|
||||
# Add any paths that contain custom static files (such as style sheets) here,
|
||||
# relative to this directory. They are copied after the builtin static files,
|
||||
# so a file named "default.css" will overwrite the builtin "default.css".
|
||||
html_static_path = ['_static']
|
||||
|
||||
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
|
||||
# using the given strftime format.
|
||||
#html_last_updated_fmt = '%b %d, %Y'
|
||||
|
||||
# If true, SmartyPants will be used to convert quotes and dashes to
|
||||
# typographically correct entities.
|
||||
#html_use_smartypants = True
|
||||
|
||||
# Custom sidebar templates, maps document names to template names.
|
||||
#html_sidebars = {}
|
||||
#html_sidebars = {'index': 'indexsidebar.html'}
|
||||
|
||||
# Additional templates that should be rendered to pages, maps page names to
|
||||
# template names.
|
||||
#html_additional_pages = {}
|
||||
#html_additional_pages = {'index': 'index.html'}
|
||||
|
||||
|
||||
# If false, no module index is generated.
|
||||
html_domain_indices = True
|
||||
|
||||
# If false, no index is generated.
|
||||
html_use_index = False
|
||||
|
||||
# If true, the index is split into individual pages for each letter.
|
||||
#html_split_index = False
|
||||
|
||||
# If true, links to the reST sources are added to the pages.
|
||||
html_show_sourcelink = False
|
||||
|
||||
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
|
||||
#html_show_sphinx = True
|
||||
|
||||
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
|
||||
#html_show_copyright = True
|
||||
|
||||
# If true, an OpenSearch description file will be output, and all pages will
|
||||
# contain a <link> tag referring to it. The value of this option must be the
|
||||
# base URL from which the finished HTML is served.
|
||||
#html_use_opensearch = ''
|
||||
|
||||
# This is the file name suffix for HTML files (e.g. ".xhtml").
|
||||
#html_file_suffix = None
|
||||
|
||||
# Output file base name for HTML help builder.
|
||||
htmlhelp_basename = 'pytestdoc'
|
||||
|
||||
|
||||
# -- Options for LaTeX output --------------------------------------------------
|
||||
|
||||
# The paper size ('letter' or 'a4').
|
||||
#latex_paper_size = 'letter'
|
||||
|
||||
# The font size ('10pt', '11pt' or '12pt').
|
||||
#latex_font_size = '10pt'
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title, author, documentclass [howto/manual]).
|
||||
latex_documents = [
|
||||
('contents', 'pytest.tex', u'pytest Documentation',
|
||||
u'holger krekel et alii', 'manual'),
|
||||
]
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top of
|
||||
# the title page.
|
||||
#latex_logo = None
|
||||
|
||||
# For "manual" documents, if this is true, then toplevel headings are parts,
|
||||
# not chapters.
|
||||
#latex_use_parts = False
|
||||
|
||||
# If true, show page references after internal links.
|
||||
#latex_show_pagerefs = False
|
||||
|
||||
# If true, show URL addresses after external links.
|
||||
#latex_show_urls = False
|
||||
|
||||
# Additional stuff for the LaTeX preamble.
|
||||
#latex_preamble = ''
|
||||
|
||||
# Documents to append as an appendix to all manuals.
|
||||
#latex_appendices = []
|
||||
|
||||
# If false, no module index is generated.
|
||||
latex_domain_indices = False
|
||||
|
||||
# -- Options for manual page output --------------------------------------------
|
||||
|
||||
# One entry per manual page. List of tuples
|
||||
# (source start file, name, description, authors, manual section).
|
||||
man_pages = [
|
||||
('usage', 'pytest', u'pytest usage',
|
||||
[u'holger krekel at merlinux eu'], 1)
|
||||
]
|
||||
|
||||
|
||||
# -- Options for Epub output ---------------------------------------------------
|
||||
|
||||
# Bibliographic Dublin Core info.
|
||||
epub_title = u'pytest'
|
||||
epub_author = u'holger krekel at merlinux eu'
|
||||
epub_publisher = u'holger krekel at merlinux eu'
|
||||
epub_copyright = u'2011, holger krekel et alii'
|
||||
|
||||
# The language of the text. It defaults to the language option
|
||||
# or en if the language is not set.
|
||||
#epub_language = ''
|
||||
|
||||
# The scheme of the identifier. Typical schemes are ISBN or URL.
|
||||
#epub_scheme = ''
|
||||
|
||||
# The unique identifier of the text. This can be a ISBN number
|
||||
# or the project homepage.
|
||||
#epub_identifier = ''
|
||||
|
||||
# A unique identification for the text.
|
||||
#epub_uid = ''
|
||||
|
||||
# HTML files that should be inserted before the pages created by sphinx.
|
||||
# The format is a list of tuples containing the path and title.
|
||||
#epub_pre_files = []
|
||||
|
||||
# HTML files shat should be inserted after the pages created by sphinx.
|
||||
# The format is a list of tuples containing the path and title.
|
||||
#epub_post_files = []
|
||||
|
||||
# A list of files that should not be packed into the epub file.
|
||||
#epub_exclude_files = []
|
||||
|
||||
# The depth of the table of contents in toc.ncx.
|
||||
#epub_tocdepth = 3
|
||||
|
||||
# Allow duplicate toc entries.
|
||||
#epub_tocdup = True
|
||||
|
||||
|
||||
# Example configuration for intersphinx: refer to the Python standard library.
|
||||
intersphinx_mapping = {'python': ('http://docs.python.org/', None),
|
||||
# 'lib': ("http://docs.python.org/2.7library/", None),
|
||||
}
|
||||
|
||||
|
||||
def setup(app):
|
||||
#from sphinx.ext.autodoc import cut_lines
|
||||
#app.connect('autodoc-process-docstring', cut_lines(4, what=['module']))
|
||||
app.add_description_unit('confval', 'confval',
|
||||
objname='configuration value',
|
||||
indextemplate='pair: %s; configuration value')
|
||||
@@ -9,6 +9,10 @@ Contact channels
|
||||
2.0 and above). You may also peek at the `old issue tracker`_ but please
|
||||
don't submit bugs there anymore.
|
||||
|
||||
- `pytest on stackoverflow.com <http://stackoverflow.com/search?q=pytest>`_
|
||||
to post questions with the tag ``pytest``. New questions will usually
|
||||
be seen by pytest users or developers.
|
||||
|
||||
- `Testing In Python`_: a mailing list for Python testing tools and discussion.
|
||||
|
||||
- `py-dev developers list`_ pytest specific announcements and discussions.
|
||||
@@ -1,10 +1,10 @@
|
||||
|
||||
.. _toc:
|
||||
|
||||
Full pytest documenation
|
||||
========================
|
||||
Full pytest documentation
|
||||
===========================
|
||||
|
||||
`Download latest version as PDF <http://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_
|
||||
`Download latest version as PDF <pytest.pdf>`_
|
||||
|
||||
.. `Download latest version as EPUB <http://media.readthedocs.org/epub/pytest/latest/pytest.epub>`_
|
||||
|
||||
@@ -12,11 +12,12 @@ Full pytest documenation
|
||||
:maxdepth: 2
|
||||
|
||||
overview
|
||||
example/index
|
||||
apiref
|
||||
plugins
|
||||
example/index
|
||||
talks
|
||||
develop
|
||||
funcarg_compare.txt
|
||||
announce/index
|
||||
|
||||
.. toctree::
|
||||
@@ -23,8 +23,9 @@ It looks for file basenames in this order::
|
||||
tox.ini
|
||||
setup.cfg
|
||||
|
||||
Searching stops when the first ``[pytest]`` section is found.
|
||||
There is no merging of configuration values from multiple files. Example::
|
||||
Searching stops when the first ``[pytest]`` section is found in any of
|
||||
these files. There is no merging of configuration values from multiple
|
||||
files. Example::
|
||||
|
||||
py.test path/to/testdir
|
||||
|
||||
@@ -64,13 +65,13 @@ Builtin configuration file options
|
||||
|
||||
.. confval:: minversion
|
||||
|
||||
specifies a minimal pytest version required for running tests.
|
||||
Specifies a minimal pytest version required for running tests.
|
||||
|
||||
minversion = 2.1 # will fail if we run with pytest-2.0
|
||||
|
||||
.. confval:: addopts
|
||||
|
||||
add the specified ``OPTS`` to the set of command line arguments as if they
|
||||
Add the specified ``OPTS`` to the set of command line arguments as if they
|
||||
had been specified by the user. Example: if you have this ini file content::
|
||||
|
||||
[pytest]
|
||||
@@ -94,7 +95,7 @@ Builtin configuration file options
|
||||
[seq] matches any character in seq
|
||||
[!seq] matches any char not in seq
|
||||
|
||||
Default patterns are ``.* _* CVS {args}``. Setting a ``norecurse``
|
||||
Default patterns are ``.* _* CVS {args}``. Setting a ``norecursedir``
|
||||
replaces the default. Here is an example of how to avoid
|
||||
certain directories::
|
||||
|
||||
@@ -121,4 +122,3 @@ Builtin configuration file options
|
||||
and methods are considered as test modules.
|
||||
|
||||
See :ref:`change naming conventions` for examples.
|
||||
|
||||
@@ -44,10 +44,9 @@ then you can just invoke ``py.test`` without command line options::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 1 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 1 items
|
||||
|
||||
mymodule.py .
|
||||
|
||||
========================= 1 passed in 0.06 seconds =========================
|
||||
|
||||
========================= 1 passed in 0.02 seconds =========================
|
||||
@@ -15,7 +15,7 @@ def test_generative(param1, param2):
|
||||
assert param1 * 2 < param2
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
if 'param1' in metafunc.funcargnames:
|
||||
if 'param1' in metafunc.fixturenames:
|
||||
metafunc.addcall(funcargs=dict(param1=3, param2=6))
|
||||
|
||||
class TestFailing(object):
|
||||
@@ -13,9 +13,9 @@ example: specifying and selecting acceptance tests
|
||||
help="run (slow) acceptance tests")
|
||||
|
||||
def pytest_funcarg__accept(request):
|
||||
return AcceptFuncarg(request)
|
||||
return AcceptFixture(request)
|
||||
|
||||
class AcceptFuncarg:
|
||||
class AcceptFixture:
|
||||
def __init__(self, request):
|
||||
if not request.config.option.acceptance:
|
||||
pytest.skip("specify -A to run acceptance tests")
|
||||
@@ -39,7 +39,7 @@ and the actual test function example:
|
||||
If you run this test without specifying a command line option
|
||||
the test will get skipped with an appropriate message. Otherwise
|
||||
you can start to add convenience and test support methods
|
||||
to your AcceptFuncarg and drive running of tools or
|
||||
to your AcceptFixture and drive running of tools or
|
||||
applications and provide ways to do assertions about
|
||||
the output.
|
||||
|
||||
198
doc/en/example/attic_remoteinterpreter.txt
Normal file
@@ -0,0 +1,198 @@
|
||||
|
||||
.. highlightlang:: python
|
||||
|
||||
.. _myapp:
|
||||
|
||||
Building an SSH connecting Application fixture
|
||||
==========================================================
|
||||
|
||||
The goal of this tutorial-example is to show how you can put efficient
|
||||
test support and fixture code in one place, allowing test modules and
|
||||
test functions to stay ignorant of importing, configuration or
|
||||
setup/teardown details.
|
||||
|
||||
The tutorial implements a simple ``RemoteInterpreter`` object that
|
||||
allows evaluation of python expressions. We are going to use
|
||||
the `execnet <http://codespeak.net/execnet>`_ package for the
|
||||
underlying cross-python bridge functionality.
|
||||
|
||||
|
||||
Step 1: Implementing a first test
|
||||
--------------------------------------------------------------
|
||||
|
||||
Let's write a simple test function using a not yet defined ``interp`` fixture::
|
||||
|
||||
# content of test_remoteinterpreter.py
|
||||
|
||||
def test_eval_simple(interp):
|
||||
assert interp.eval("6*9") == 42
|
||||
|
||||
The test function needs an argument named `interp` and therefore pytest will
|
||||
look for a :ref:`fixture function` that matches this name. We'll define it
|
||||
in a :ref:`local plugin <localplugin>` to make it available also to other
|
||||
test modules::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
import pytest
from .remoteinterpreter import RemoteInterpreter
|
||||
|
||||
@pytest.fixture
|
||||
def interp(request):
|
||||
import execnet
|
||||
gw = execnet.makegateway()
|
||||
return RemoteInterpreter(gw)
|
||||
|
||||
To run the example we furthermore need to implement a RemoteInterpreter
|
||||
object which works with the injected execnet-gateway connection::
|
||||
|
||||
# content of remoteinterpreter.py
|
||||
|
||||
class RemoteInterpreter:
|
||||
def __init__(self, gateway):
|
||||
self.gateway = gateway
|
||||
|
||||
def eval(self, expression):
|
||||
# execnet opens a "gateway" to the remote process
|
||||
# which enables remotely executing code and communicating
|
||||
# to and fro via channels
|
||||
ch = self.gateway.remote_exec("channel.send(%s)" % expression)
|
||||
return ch.receive()
|
||||
|
||||
That's it, we can now run the test::
|
||||
|
||||
$ py.test test_remoteinterpreter.py
|
||||
Traceback (most recent call last):
|
||||
File "/home/hpk/p/pytest/.tox/regen/bin/py.test", line 9, in <module>
|
||||
load_entry_point('pytest==2.3.0', 'console_scripts', 'py.test')()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 473, in main
|
||||
config = _prepareconfig(args, plugins)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 463, in _prepareconfig
|
||||
pluginmanager=_pluginmanager, args=args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 422, in __call__
|
||||
return self._docall(methods, kwargs)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 433, in _docall
|
||||
res = mc.execute()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 351, in execute
|
||||
res = method(**kwargs)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/helpconfig.py", line 25, in pytest_cmdline_parse
|
||||
config = __multicall__.execute()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 351, in execute
|
||||
res = method(**kwargs)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 10, in pytest_cmdline_parse
|
||||
config.parse(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 344, in parse
|
||||
self._preparse(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 322, in _preparse
|
||||
self._setinitialconftest(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 301, in _setinitialconftest
|
||||
self._conftest.setinitial(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 160, in setinitial
|
||||
self._try_load_conftest(anchor)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 166, in _try_load_conftest
|
||||
self._path2confmods[None] = self.getconftestmodules(anchor)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 190, in getconftestmodules
|
||||
clist[:0] = self.getconftestmodules(dp)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 189, in getconftestmodules
|
||||
clist.append(self.importconftest(conftestpath))
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 218, in importconftest
|
||||
self._conftestpath2mod[conftestpath] = mod = conftestpath.pyimport()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/py/_path/local.py", line 532, in pyimport
|
||||
__import__(modname)
|
||||
File "/tmp/doc-exec-286/conftest.py", line 2, in <module>
|
||||
from .remoteinterpreter import RemoteInterpreter
|
||||
ValueError: Attempted relative import in non-package
|
||||
|
||||
.. _`tut-cmdlineoption`:
|
||||
|
||||
Step 2: Adding command line configuration
|
||||
-----------------------------------------------------------
|
||||
|
||||
To add a command line option we update the ``conftest.py`` of
|
||||
the previous example with an option whose value
|
||||
is passed on to the MyApp object::
|
||||
|
||||
# content of ./conftest.py
|
||||
import pytest
|
||||
from myapp import MyApp
|
||||
|
||||
def pytest_addoption(parser): # pytest hook called during initialisation
|
||||
parser.addoption("--ssh", action="store", default=None,
|
||||
help="specify ssh host to run tests with")
|
||||
|
||||
@pytest.fixture
|
||||
def mysetup(request): # "mysetup" factory function
|
||||
return MySetup(request.config)
|
||||
|
||||
class MySetup:
|
||||
def __init__(self, config):
|
||||
self.config = config
|
||||
self.app = MyApp()
|
||||
|
||||
def getsshconnection(self):
|
||||
import execnet
|
||||
host = self.config.option.ssh
|
||||
if host is None:
|
||||
pytest.skip("specify ssh host with --ssh")
|
||||
return execnet.SshGateway(host)
|
||||
|
||||
|
||||
Now any test function can use the ``mysetup.getsshconnection()`` method
|
||||
like this::
|
||||
|
||||
# content of test_ssh.py
|
||||
class TestClass:
|
||||
def test_function(self, mysetup):
|
||||
conn = mysetup.getsshconnection()
|
||||
# work with conn
|
||||
|
||||
Running it yields::
|
||||
|
||||
$ py.test -q test_ssh.py -rs
|
||||
Traceback (most recent call last):
|
||||
File "/home/hpk/p/pytest/.tox/regen/bin/py.test", line 9, in <module>
|
||||
load_entry_point('pytest==2.3.0', 'console_scripts', 'py.test')()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 473, in main
|
||||
config = _prepareconfig(args, plugins)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 463, in _prepareconfig
|
||||
pluginmanager=_pluginmanager, args=args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 422, in __call__
|
||||
return self._docall(methods, kwargs)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 433, in _docall
|
||||
res = mc.execute()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 351, in execute
|
||||
res = method(**kwargs)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/helpconfig.py", line 25, in pytest_cmdline_parse
|
||||
config = __multicall__.execute()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/core.py", line 351, in execute
|
||||
res = method(**kwargs)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 10, in pytest_cmdline_parse
|
||||
config.parse(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 344, in parse
|
||||
self._preparse(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 322, in _preparse
|
||||
self._setinitialconftest(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 301, in _setinitialconftest
|
||||
self._conftest.setinitial(args)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 160, in setinitial
|
||||
self._try_load_conftest(anchor)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 166, in _try_load_conftest
|
||||
self._path2confmods[None] = self.getconftestmodules(anchor)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 190, in getconftestmodules
|
||||
clist[:0] = self.getconftestmodules(dp)
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 189, in getconftestmodules
|
||||
clist.append(self.importconftest(conftestpath))
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/config.py", line 218, in importconftest
|
||||
self._conftestpath2mod[conftestpath] = mod = conftestpath.pyimport()
|
||||
File "/home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/py/_path/local.py", line 532, in pyimport
|
||||
__import__(modname)
|
||||
File "/tmp/doc-exec-286/conftest.py", line 2, in <module>
|
||||
from myapp import MyApp
|
||||
ImportError: No module named myapp
|
||||
|
||||
If you specify a command line option like ``py.test --ssh=python.org`` the test will execute as expected.
|
||||
|
||||
Note that neither the ``TestClass`` nor the ``test_function`` need to
|
||||
know anything about how to set up the test state. It is handled separately
|
||||
in the ``conftest.py`` file. It is easy
|
||||
to extend the ``mysetup`` object for further needs in the test code - and for use by any other test functions in the files and directories below the ``conftest.py`` file.
|
||||
|
||||
18
doc/en/example/costlysetup/conftest.py
Normal file
@@ -0,0 +1,18 @@
|
||||
|
||||
import pytest
|
||||
|
||||
@pytest.fixture("session")
|
||||
def setup(request):
|
||||
setup = CostlySetup()
|
||||
request.addfinalizer(setup.finalize)
|
||||
return setup
|
||||
|
||||
class CostlySetup:
|
||||
def __init__(self):
|
||||
import time
|
||||
print ("performing costly setup")
|
||||
time.sleep(5)
|
||||
self.timecostly = 1
|
||||
|
||||
def finalize(self):
|
||||
del self.timecostly
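
A test module below this ``conftest.py`` could use the session-scoped
``setup`` fixture simply by naming it as an argument; this usage sketch is
hypothetical and not part of the example directory::

    # content of test_uses_costly_setup.py  (hypothetical)

    def test_something_quick(setup):
        # the expensive initialization happens at most once per session
        assert setup.timecostly == 1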
|
||||
@@ -7,16 +7,21 @@ Usages and Examples
|
||||
Here is a (growing) list of examples. :ref:`Contact <contact>` us if you
|
||||
need more examples or have questions. Also take a look at the :ref:`comprehensive documentation <toc>` which contains many example snippets as well.
|
||||
|
||||
Also, `pytest on stackoverflow.com <http://stackoverflow.com/search?q=pytest>`_
|
||||
is a primary, continuously updated source of pytest questions and answers
|
||||
which often contain examples. New questions will usually be seen
|
||||
by pytest users or developers.
|
||||
|
||||
.. note::
|
||||
|
||||
see :doc:`../getting-started` for basic introductionary examples
|
||||
see :doc:`../getting-started` for basic introductory examples
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
reportingdemo.txt
|
||||
simple.txt
|
||||
mysetup.txt
|
||||
parametrize.txt
|
||||
markers.txt
|
||||
pythoncollection.txt
|
||||
nonpython.txt
|
||||
375
doc/en/example/markers.txt
Normal file
@@ -0,0 +1,375 @@
|
||||
|
||||
.. _`mark examples`:
|
||||
|
||||
Working with custom markers
|
||||
=================================================
|
||||
|
||||
Here are some examples using the :ref:`mark` mechanism.
|
||||
|
||||
Marking test functions and selecting them for a run
|
||||
----------------------------------------------------
|
||||
|
||||
You can "mark" a test function with custom metadata like this::
|
||||
|
||||
# content of test_server.py
|
||||
|
||||
import pytest
|
||||
@pytest.mark.webtest
|
||||
def test_send_http():
|
||||
pass # perform some webtest test for your app
|
||||
def test_something_quick():
|
||||
pass
|
||||
|
||||
.. versionadded:: 2.2
|
||||
|
||||
You can then restrict a test run to only run tests marked with ``webtest``::
|
||||
|
||||
$ py.test -v -m webtest
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
collecting ... collected 2 items
|
||||
|
||||
test_server.py:3: test_send_http PASSED
|
||||
|
||||
=================== 1 tests deselected by "-m 'webtest'" ===================
|
||||
================== 1 passed, 1 deselected in 0.00 seconds ==================
|
||||
|
||||
Or the inverse, running all tests except the webtest ones::
|
||||
|
||||
$ py.test -v -m "not webtest"
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
collecting ... collected 2 items
|
||||
|
||||
test_server.py:6: test_something_quick PASSED
|
||||
|
||||
================= 1 tests deselected by "-m 'not webtest'" =================
|
||||
================== 1 passed, 1 deselected in 0.00 seconds ==================
|
||||
|
||||
Registering markers
|
||||
-------------------------------------
|
||||
|
||||
.. versionadded:: 2.2
|
||||
|
||||
.. ini-syntax for custom markers:
|
||||
|
||||
Registering markers for your test suite is simple::
|
||||
|
||||
# content of pytest.ini
|
||||
[pytest]
|
||||
markers =
|
||||
webtest: mark a test as a webtest.
|
||||
|
||||
You can ask which markers exist for your test suite - the list includes our just-defined ``webtest`` marker::
|
||||
|
||||
$ py.test --markers
|
||||
@pytest.mark.webtest: mark a test as a webtest.
|
||||
|
||||
@pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform.
|
||||
|
||||
@pytest.mark.xfail(*conditions, reason=None, run=True): mark the the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.
|
||||
|
||||
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.
|
||||
|
||||
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures.
|
||||
|
||||
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
|
||||
|
||||
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
|
||||
|
||||
|
||||
For an example on how to add and work with markers from a plugin, see
|
||||
:ref:`adding a custom marker from a plugin`.
|
||||
|
||||
.. note::
|
||||
|
||||
It is recommended to explicitly register markers so that:
|
||||
|
||||
* there is one place in your test suite defining your markers
|
||||
|
||||
* asking for existing markers via ``py.test --markers`` gives good output
|
||||
|
||||
* typos in function markers are treated as an error if you use
|
||||
the ``--strict`` option (a minimal ini sketch follows this list). Later versions of py.test are probably
|
||||
going to treat non-registered markers as an error.
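
For example, a minimal ``pytest.ini`` sketch that registers the marker and
opts into strict checking (whether to put ``--strict`` into ``addopts``
permanently is a project decision)::

    # content of pytest.ini
    [pytest]
    addopts = --strict
    markers =
        webtest: mark a test as a webtest.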
|
||||
|
||||
.. _`scoped-marking`:
|
||||
|
||||
Marking whole classes or modules
|
||||
----------------------------------------------------
|
||||
|
||||
If you are programming with Python 2.6 or later you may use ``pytest.mark``
|
||||
decorators with classes to apply markers to all of its test methods::
|
||||
|
||||
# content of test_mark_classlevel.py
|
||||
import pytest
|
||||
@pytest.mark.webtest
|
||||
class TestClass:
|
||||
def test_startup(self):
|
||||
pass
|
||||
def test_startup_and_more(self):
|
||||
pass
|
||||
|
||||
This is equivalent to directly applying the decorator to the
|
||||
two test functions.
|
||||
|
||||
To remain backward-compatible with Python 2.4 you can also set a
|
||||
``pytestmark`` attribute on a TestClass like this::
|
||||
|
||||
import pytest
|
||||
|
||||
class TestClass:
|
||||
pytestmark = pytest.mark.webtest
|
||||
|
||||
or if you need to use multiple markers you can use a list::
|
||||
|
||||
import pytest
|
||||
|
||||
class TestClass:
|
||||
pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
|
||||
|
||||
You can also set a module level marker::
|
||||
|
||||
import pytest
|
||||
pytestmark = pytest.mark.webtest
|
||||
|
||||
in which case it will be applied to all functions and
|
||||
methods defined in the module.
|
||||
|
||||
|
||||
Using ``-k TEXT`` to select tests
|
||||
----------------------------------------------------
|
||||
|
||||
You can use the ``-k`` command line option to only run tests with names matching
|
||||
the given argument::
|
||||
|
||||
$ py.test -k send_http # running with the above defined examples
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
|
||||
test_server.py .
|
||||
|
||||
=================== 3 tests deselected by '-ksend_http' ====================
|
||||
================== 1 passed, 3 deselected in 0.01 seconds ==================
|
||||
|
||||
And you can also run all tests except the ones that match the keyword::
|
||||
|
||||
$ py.test -k-send_http
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
|
||||
test_mark_classlevel.py ..
|
||||
test_server.py .
|
||||
|
||||
=================== 1 tests deselected by '-k-send_http' ===================
|
||||
================== 3 passed, 1 deselected in 0.01 seconds ==================
|
||||
|
||||
Or to only select the class::
|
||||
|
||||
$ py.test -kTestClass
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
|
||||
test_mark_classlevel.py ..
|
||||
|
||||
=================== 2 tests deselected by '-kTestClass' ====================
|
||||
================== 2 passed, 2 deselected in 0.01 seconds ==================
|
||||
|
||||
.. _`adding a custom marker from a plugin`:
|
||||
|
||||
Custom marker and command line option to control test runs
|
||||
----------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Plugins can provide custom markers and implement specific behaviour
|
||||
based on them. This is a self-contained example which adds a command
|
||||
line option and a parametrized test function marker to run tests
|
||||
specified via named environments::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
import pytest
|
||||
def pytest_addoption(parser):
|
||||
parser.addoption("-E", dest="env", action="store", metavar="NAME",
|
||||
help="only run tests matching the environment NAME.")
|
||||
|
||||
def pytest_configure(config):
|
||||
# register an additional marker
|
||||
config.addinivalue_line("markers",
|
||||
"env(name): mark test to run only on named environment")
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
envmarker = item.keywords.get("env", None)
|
||||
if envmarker is not None:
|
||||
envname = envmarker.args[0]
|
||||
if envname != item.config.option.env:
|
||||
pytest.skip("test requires env %r" % envname)
|
||||
|
||||
A test file using this local plugin::
|
||||
|
||||
# content of test_someenv.py
|
||||
|
||||
import pytest
|
||||
@pytest.mark.env("stage1")
|
||||
def test_basic_db_operation():
|
||||
pass
|
||||
|
||||
and an example invocation specifying a different environment than what
|
||||
the test needs::
|
||||
|
||||
$ py.test -E stage2
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 1 items
|
||||
|
||||
test_someenv.py s
|
||||
|
||||
======================== 1 skipped in 0.00 seconds =========================
|
||||
|
||||
and here is one that specifies exactly the environment needed::
|
||||
|
||||
$ py.test -E stage1
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 1 items
|
||||
|
||||
test_someenv.py .
|
||||
|
||||
========================= 1 passed in 0.01 seconds =========================
|
||||
|
||||
The ``--markers`` option always gives you a list of available markers::
|
||||
|
||||
$ py.test --markers
|
||||
@pytest.mark.env(name): mark test to run only on named environment
|
||||
|
||||
@pytest.mark.skipif(*conditions): skip the given test function if evaluation of all conditions has a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform.
|
||||
|
||||
@pytest.mark.xfail(*conditions, reason=None, run=True): mark the the test function as an expected failure. Optionally specify a reason and run=False if you don't even want to execute the test function. Any positional condition strings will be evaluated (like with skipif) and if one is False the marker will not be applied.
|
||||
|
||||
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.
|
||||
|
||||
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures.
|
||||
|
||||
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
|
||||
|
||||
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
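
If you routinely test several environments, a variant of the above
``conftest.py`` can accept ``-E`` multiple times. This is only a sketch of
such a variant (it is not part of the original example) which assumes an
``action="append"`` option and leaves the marker itself unchanged::

    # content of conftest.py -- variant sketch, not the original example
    import pytest

    def pytest_addoption(parser):
        parser.addoption("-E", dest="env", action="append", default=[],
            metavar="NAME",
            help="only run tests matching the environment NAME (may be repeated).")

    def pytest_runtest_setup(item):
        envmarker = item.keywords.get("env", None)
        if envmarker is not None:
            envname = envmarker.args[0]
            # with action="append" the option value is a list of names
            if envname not in item.config.option.env:
                pytest.skip("test requires env %r" % envname)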
|
||||
|
||||
|
||||
Reading markers which were set from multiple places
|
||||
----------------------------------------------------
|
||||
|
||||
.. versionadded: 2.2.2
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times to a test function. From plugin
|
||||
code you can iterate over all such settings. Example::
|
||||
|
||||
# content of test_mark_three_times.py
|
||||
import pytest
|
||||
pytestmark = pytest.mark.glob("module", x=1)
|
||||
|
||||
@pytest.mark.glob("class", x=2)
|
||||
class TestClass:
|
||||
@pytest.mark.glob("function", x=3)
|
||||
def test_something(self):
|
||||
pass
|
||||
|
||||
Here we have the marker "glob" applied three times to the same
|
||||
test function. From a conftest file we can read it like this::
|
||||
|
||||
# content of conftest.py
|
||||
import sys
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
g = item.keywords.get("glob", None)
|
||||
if g is not None:
|
||||
for info in g:
|
||||
print ("glob args=%s kwargs=%s" %(info.args, info.kwargs))
|
||||
sys.stdout.flush()
|
||||
|
||||
Let's run this without capturing output and see what we get::
|
||||
|
||||
$ py.test -q -s
|
||||
glob args=('function',) kwargs={'x': 3}
|
||||
glob args=('class',) kwargs={'x': 2}
|
||||
glob args=('module',) kwargs={'x': 1}
|
||||
.
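
If your plugin needs one consolidated view instead of printing each setting,
the iterated marker infos can be merged into a single dictionary. Here is a
sketch of a hypothetical helper (not part of the original example); because
iteration yields the function-level marker first, ``setdefault`` lets the most
specific value win::

    # hypothetical helper for a conftest.py
    def merged_glob_settings(item):
        settings = {}
        g = item.keywords.get("glob", None)
        if g is not None:
            # iteration order is function, class, module (as shown above)
            for info in g:
                for key, value in info.kwargs.items():
                    settings.setdefault(key, value)
        return settings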
|
||||
|
||||
marking platform-specific tests with pytest
|
||||
--------------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Suppose you have a test suite which marks tests for particular platforms,
|
||||
namely ``pytest.mark.osx``, ``pytest.mark.win32`` etc. and you
|
||||
also have tests that run on all platforms and have no specific
|
||||
marker. If you now want to have a way to only run the tests
|
||||
for your particular platform, you could use the following plugin::
|
||||
|
||||
# content of conftest.py
|
||||
#
|
||||
import sys
|
||||
import pytest
|
||||
|
||||
ALL = set("osx linux2 win32".split())
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
if isinstance(item, item.Function):
|
||||
plat = sys.platform
|
||||
if plat not in item.keywords:
|
||||
if ALL.intersection(item.keywords):
|
||||
pytest.skip("cannot run on platform %s" %(plat))
|
||||
|
||||
then tests will be skipped if they were specified for a different platform.
|
||||
Let's write a little test file to show what this looks like::
|
||||
|
||||
# content of test_plat.py
|
||||
|
||||
import pytest
|
||||
|
||||
@pytest.mark.osx
|
||||
def test_if_apple_is_evil():
|
||||
pass
|
||||
|
||||
@pytest.mark.linux2
|
||||
def test_if_linux_works():
|
||||
pass
|
||||
|
||||
@pytest.mark.win32
|
||||
def test_if_win32_crashes():
|
||||
pass
|
||||
|
||||
def test_runs_everywhere():
|
||||
pass
|
||||
|
||||
then you will see two tests skipped and two tests executed, as expected::
|
||||
|
||||
$ py.test -rs # this option reports skip reasons
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
|
||||
test_plat.py s.s.
|
||||
========================= short test summary info ==========================
|
||||
SKIP [2] /tmp/doc-exec-288/conftest.py:12: cannot run on platform linux2
|
||||
|
||||
=================== 2 passed, 2 skipped in 0.01 seconds ====================
|
||||
|
||||
Note that if you select a platform via the ``-m`` marker command line option like this::
|
||||
|
||||
$ py.test -m linux2
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
|
||||
test_plat.py .
|
||||
|
||||
=================== 3 tests deselected by "-m 'linux2'" ====================
|
||||
================== 1 passed, 3 deselected in 0.01 seconds ==================
|
||||
|
||||
then the unmarked tests will not be run. It is thus a way to restrict the run to specific tests.
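
If you also want these platform names to show up in ``--markers`` output, you
can register them with the same ``config.addinivalue_line`` API used earlier in
this document. A small sketch of such an addition to the above conftest.py
(an assumption, not part of the original example)::

    # addition to conftest.py -- sketch
    def pytest_configure(config):
        for name in ALL:
            config.addinivalue_line("markers",
                "%s: mark test to run only on the %s platform" % (name, name))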
|
||||
50
doc/en/example/multipython.py
Normal file
@@ -0,0 +1,50 @@
|
||||
"""
|
||||
module containing parametrized tests exercising cross-python
|
||||
serialization via the pickle module.
|
||||
"""
|
||||
import py, pytest
|
||||
|
||||
pythonlist = ['python2.4', 'python2.5', 'python2.6', 'python2.7', 'python2.8']
|
||||
@pytest.fixture(params=pythonlist)
|
||||
def python1(request, tmpdir):
|
||||
picklefile = tmpdir.join("data.pickle")
|
||||
return Python(request.param, picklefile)
|
||||
|
||||
@pytest.fixture(params=pythonlist)
|
||||
def python2(request, python1):
|
||||
return Python(request.param, python1.picklefile)
|
||||
|
||||
class Python:
|
||||
def __init__(self, version, picklefile):
|
||||
self.pythonpath = py.path.local.sysfind(version)
|
||||
if not self.pythonpath:
|
||||
py.test.skip("%r not found" %(version,))
|
||||
self.picklefile = picklefile
|
||||
def dumps(self, obj):
|
||||
dumpfile = self.picklefile.dirpath("dump.py")
|
||||
dumpfile.write(py.code.Source("""
|
||||
import pickle
|
||||
f = open(%r, 'wb')
|
||||
s = pickle.dump(%r, f)
|
||||
f.close()
|
||||
""" % (str(self.picklefile), obj)))
|
||||
py.process.cmdexec("%s %s" %(self.pythonpath, dumpfile))
|
||||
|
||||
def load_and_is_true(self, expression):
|
||||
loadfile = self.picklefile.dirpath("load.py")
|
||||
loadfile.write(py.code.Source("""
|
||||
import pickle
|
||||
f = open(%r, 'rb')
|
||||
obj = pickle.load(f)
|
||||
f.close()
|
||||
res = eval(%r)
|
||||
if not res:
|
||||
raise SystemExit(1)
|
||||
""" % (str(self.picklefile), expression)))
|
||||
print (loadfile)
|
||||
py.process.cmdexec("%s %s" %(self.pythonpath, loadfile))
|
||||
|
||||
@pytest.mark.parametrize("obj", [42, {}, {1:3},])
|
||||
def test_basic_objects(python1, python2, obj):
|
||||
python1.dumps(obj)
|
||||
python2.load_and_is_true("obj == %s" % obj)
|
||||
@@ -27,8 +27,8 @@ now execute the test specification::
|
||||
|
||||
nonpython $ py.test test_simple.yml
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 2 items
|
||||
|
||||
test_simple.yml .F
|
||||
|
||||
@@ -37,7 +37,7 @@ now execute the test specification::
|
||||
usecase execution failed
|
||||
spec failed: 'some': 'other'
|
||||
no further details known at this point.
|
||||
==================== 1 failed, 1 passed in 0.09 seconds ====================
|
||||
==================== 1 failed, 1 passed in 0.03 seconds ====================
|
||||
|
||||
You get one dot for the passing ``sub1: sub1`` check and one failure.
|
||||
Obviously in the above ``conftest.py`` you'll want to implement a more
|
||||
@@ -51,12 +51,12 @@ your own domain specific testing language this way.
|
||||
representation string of your choice. It
|
||||
will be reported as a (red) string.
|
||||
|
||||
``reportinfo()`` is used for representing the test location and is also consulted for
|
||||
reporting in ``verbose`` mode::
|
||||
``reportinfo()`` is used for representing the test location and is also
|
||||
consulted when reporting in ``verbose`` mode::
|
||||
|
||||
nonpython $ py.test -v
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3 -- /Users/hpk/venv/0/bin/python
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
collecting ... collected 2 items
|
||||
|
||||
test_simple.yml:1: usecase: ok PASSED
|
||||
@@ -67,17 +67,17 @@ reporting in ``verbose`` mode::
|
||||
usecase execution failed
|
||||
spec failed: 'some': 'other'
|
||||
no further details known at this point.
|
||||
==================== 1 failed, 1 passed in 0.09 seconds ====================
|
||||
==================== 1 failed, 1 passed in 0.03 seconds ====================
|
||||
|
||||
While developing your custom test collection and execution it's also
|
||||
interesting to just look at the collection tree::
|
||||
|
||||
nonpython $ py.test --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 2 items
|
||||
<YamlFile 'test_simple.yml'>
|
||||
<YamlItem 'ok'>
|
||||
<YamlItem 'hello'>
|
||||
|
||||
============================= in 0.08 seconds =============================
|
||||
============================= in 0.03 seconds =============================
|
||||
280
doc/en/example/parametrize.txt
Normal file
@@ -0,0 +1,280 @@
|
||||
|
||||
.. _paramexamples:
|
||||
|
||||
Parametrizing tests
|
||||
=================================================
|
||||
|
||||
.. currentmodule:: _pytest.python
|
||||
|
||||
py.test allows you to easily parametrize test functions.
|
||||
For basic docs, see :ref:`parametrize-basics`.
|
||||
|
||||
In the following we provide some examples using
|
||||
the builtin mechanisms.
|
||||
|
||||
Generating parameter combinations, depending on command line
|
||||
----------------------------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Let's say we want to execute a test with different computation
|
||||
parameters and the parameter range shall be determined by a command
|
||||
line argument. Let's first write a simple (do-nothing) computation test::
|
||||
|
||||
# content of test_compute.py
|
||||
|
||||
def test_compute(param1):
|
||||
assert param1 < 4
|
||||
|
||||
Now we add a test configuration like this::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
def pytest_addoption(parser):
|
||||
parser.addoption("--all", action="store_true",
|
||||
help="run all combinations")
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
if 'param1' in metafunc.fixturenames:
|
||||
if metafunc.config.option.all:
|
||||
end = 5
|
||||
else:
|
||||
end = 2
|
||||
metafunc.parametrize("param1", range(end))
|
||||
|
||||
This means that we only run 2 tests if we do not pass ``--all``::
|
||||
|
||||
$ py.test -q test_compute.py
|
||||
..
|
||||
|
||||
We run only two computations, so we see two dots.
|
||||
Let's run the full monty::
|
||||
|
||||
$ py.test -q --all
|
||||
....F
|
||||
================================= FAILURES =================================
|
||||
_____________________________ test_compute[4] ______________________________
|
||||
|
||||
param1 = 4
|
||||
|
||||
def test_compute(param1):
|
||||
> assert param1 < 4
|
||||
E assert 4 < 4
|
||||
|
||||
test_compute.py:3: AssertionError
|
||||
|
||||
As expected when running the full range of ``param1`` values
|
||||
we'll get an error on the last one.
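
If the default ``test_compute[0]`` style ids are not descriptive enough,
``metafunc.parametrize`` also accepts an ``ids`` argument (the scenarios
example below relies on it). A hedged variant of the hook above::

    # content of conftest.py -- variant sketch
    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.fixturenames:
            end = 5 if metafunc.config.option.all else 2
            # give each generated test an explicit id like "input0", "input1", ...
            metafunc.parametrize("param1", range(end),
                                 ids=["input%d" % i for i in range(end)])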
|
||||
|
||||
A quick port of "testscenarios"
|
||||
------------------------------------
|
||||
|
||||
.. _`test scenarios`: http://pypi.python.org/pypi/testscenarios/
|
||||
|
||||
Here is a quick port to run tests configured with `test scenarios`_,
|
||||
an add-on from Robert Collins for the standard unittest framework. We
|
||||
only have to work a bit to construct the correct arguments for pytest's
|
||||
:py:func:`Metafunc.parametrize`::
|
||||
|
||||
# content of test_scenarios.py
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
idlist = []
|
||||
argvalues = []
|
||||
for scenario in metafunc.cls.scenarios:
|
||||
idlist.append(scenario[0])
|
||||
items = scenario[1].items()
|
||||
argnames = [x[0] for x in items]
|
||||
argvalues.append(([x[1] for x in items]))
|
||||
metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
|
||||
|
||||
scenario1 = ('basic', {'attribute': 'value'})
|
||||
scenario2 = ('advanced', {'attribute': 'value2'})
|
||||
|
||||
class TestSampleWithScenarios:
|
||||
scenarios = [scenario1, scenario2]
|
||||
|
||||
def test_demo1(self, attribute):
|
||||
assert isinstance(attribute, str)
|
||||
|
||||
def test_demo2(self, attribute):
|
||||
assert isinstance(attribute, str)
|
||||
|
||||
This is a fully self-contained example which you can run with::
|
||||
|
||||
$ py.test test_scenarios.py
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
|
||||
test_scenarios.py ....
|
||||
|
||||
========================= 4 passed in 0.01 seconds =========================
|
||||
|
||||
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
|
||||
|
||||
|
||||
$ py.test --collectonly test_scenarios.py
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
<Module 'test_scenarios.py'>
|
||||
<Class 'TestSampleWithScenarios'>
|
||||
<Instance '()'>
|
||||
<Function 'test_demo1[basic]'>
|
||||
<Function 'test_demo2[basic]'>
|
||||
<Function 'test_demo1[advanced]'>
|
||||
<Function 'test_demo2[advanced]'>
|
||||
|
||||
============================= in 0.01 seconds =============================
|
||||
|
||||
Note that we told ``metafunc.parametrize()`` that the scenario values
|
||||
should be considered class-scoped. With pytest-2.3 this leads to a
|
||||
resource-based ordering.
|
||||
|
||||
Deferring the setup of parametrized resources
|
||||
---------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
The parametrization of test functions happens at collection
|
||||
time. It is a good idea to set up expensive resources like DB
|
||||
connections or subprocesses only when the actual test is run.
|
||||
Here is a simple example of how you can achieve that; first,
|
||||
the actual test requiring a ``db`` object::
|
||||
|
||||
# content of test_backends.py
|
||||
|
||||
import pytest
|
||||
def test_db_initialized(db):
|
||||
# a dummy test
|
||||
if db.__class__.__name__ == "DB2":
|
||||
pytest.fail("deliberately failing for demo purposes")
|
||||
|
||||
We can now add a test configuration that generates two invocations of
|
||||
the ``test_db_initialized`` function and also implements a factory that
|
||||
creates a database object for the actual test invocations::
|
||||
|
||||
# content of conftest.py
|
||||
import pytest
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
if 'db' in metafunc.fixturenames:
|
||||
metafunc.parametrize("db", ['d1', 'd2'], indirect=True)
|
||||
|
||||
class DB1:
|
||||
"one database object"
|
||||
class DB2:
|
||||
"alternative database object"
|
||||
|
||||
@pytest.fixture
|
||||
def db(request):
|
||||
if request.param == "d1":
|
||||
return DB1()
|
||||
elif request.param == "d2":
|
||||
return DB2()
|
||||
else:
|
||||
raise ValueError("invalid internal test config")
|
||||
|
||||
Let's first see how it looks at collection time::
|
||||
|
||||
$ py.test test_backends.py --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 2 items
|
||||
<Module 'test_backends.py'>
|
||||
<Function 'test_db_initialized[d1]'>
|
||||
<Function 'test_db_initialized[d2]'>
|
||||
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
And then when we run the test::
|
||||
|
||||
$ py.test -q test_backends.py
|
||||
.F
|
||||
================================= FAILURES =================================
|
||||
_________________________ test_db_initialized[d2] __________________________
|
||||
|
||||
db = <conftest.DB2 instance at 0x2928878>
|
||||
|
||||
def test_db_initialized(db):
|
||||
# a dummy test
|
||||
if db.__class__.__name__ == "DB2":
|
||||
> pytest.fail("deliberately failing for demo purposes")
|
||||
E Failed: deliberately failing for demo purposes
|
||||
|
||||
test_backends.py:6: Failed
|
||||
|
||||
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function instantiated each of the DB values during the setup phase, while ``pytest_generate_tests`` generated two corresponding calls to ``test_db_initialized`` during the collection phase.
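
If the database objects needed real setup and teardown, the same fixture could
defer the expensive work to test run time and register a finalizer. This is
only a sketch; the ``connect``/``close`` calls are hypothetical and not part of
the ``DB1``/``DB2`` classes above::

    # variant of the db fixture in conftest.py -- sketch
    import pytest

    @pytest.fixture
    def db(request):
        if request.param == "d1":
            database = DB1()
        elif request.param == "d2":
            database = DB2()
        else:
            raise ValueError("invalid internal test config")
        # hypothetical expensive setup, performed only when the test runs:
        # database.connect()
        # request.addfinalizer(database.close)
        return database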
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Parametrizing test methods through per-class configuration
|
||||
--------------------------------------------------------------
|
||||
|
||||
.. _`unittest parameterizer`: http://code.google.com/p/unittest-ext/source/browse/trunk/params.py
|
||||
|
||||
|
||||
Here is an example ``pytest_generate_tests`` function implementing a
|
||||
parametrization scheme similar to Michael Foord's `unittest
|
||||
parameterizer`_ but in a lot less code::
|
||||
|
||||
# content of ./test_parametrize.py
|
||||
import pytest
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
# called once per each test function
|
||||
funcarglist = metafunc.cls.params[metafunc.function.__name__]
|
||||
argnames = list(funcarglist[0])
|
||||
metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
|
||||
for funcargs in funcarglist])
|
||||
|
||||
class TestClass:
|
||||
# a map specifying multiple argument sets for a test method
|
||||
params = {
|
||||
'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
|
||||
'test_zerodivision': [dict(a=1, b=0), ],
|
||||
}
|
||||
|
||||
def test_equals(self, a, b):
|
||||
assert a == b
|
||||
|
||||
def test_zerodivision(self, a, b):
|
||||
pytest.raises(ZeroDivisionError, "a/b")
|
||||
|
||||
Our test generator looks up a class-level definition which specifies which
|
||||
argument sets to use for each test function. Let's run it::
|
||||
|
||||
$ py.test -q
|
||||
F..
|
||||
================================= FAILURES =================================
|
||||
________________________ TestClass.test_equals[1-2] ________________________
|
||||
|
||||
self = <test_parametrize.TestClass instance at 0x18dd4d0>, a = 1, b = 2
|
||||
|
||||
def test_equals(self, a, b):
|
||||
> assert a == b
|
||||
E assert 1 == 2
|
||||
|
||||
test_parametrize.py:18: AssertionError
|
||||
|
||||
Indirect parametrization with multiple fixtures
|
||||
--------------------------------------------------------------
|
||||
|
||||
Here is a stripped down real-life example of using parametrized
|
||||
testing for testing serialization of objects between different python
|
||||
interpreters. We define a ``test_basic_objects`` function which
|
||||
is to be run with different sets of arguments for its three arguments:
|
||||
|
||||
* ``python1``: first python interpreter, run to pickle-dump an object to a file
|
||||
* ``python2``: second interpreter, run to pickle-load an object from a file
|
||||
* ``obj``: object to be dumped/loaded
|
||||
|
||||
.. literalinclude:: multipython.py
|
||||
|
||||
Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
|
||||
|
||||
. $ py.test -rs -q multipython.py
|
||||
............sss............sss............sss............ssssssssssssssssss
|
||||
========================= short test summary info ==========================
|
||||
SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found
|
||||
16
doc/en/example/py2py3/conftest.py
Normal file
@@ -0,0 +1,16 @@
|
||||
import sys
|
||||
import pytest
|
||||
|
||||
py3 = sys.version_info[0] >= 3
|
||||
|
||||
class DummyCollector(pytest.collect.File):
|
||||
def collect(self):
|
||||
return []
|
||||
|
||||
def pytest_pycollect_makemodule(path, parent):
|
||||
bn = path.basename
|
||||
if "py3" in bn and not py3 or ("py2" in bn and py3):
|
||||
return DummyCollector(path, parent=parent)
|
||||
|
||||
|
||||
|
||||
7
doc/en/example/py2py3/test_py2.py
Normal file
@@ -0,0 +1,7 @@
|
||||
|
||||
def test_exception_syntax():
|
||||
try:
|
||||
0/0
|
||||
except ZeroDivisionError, e:
|
||||
pass
|
||||
|
||||
7
doc/en/example/py2py3/test_py3.py
Normal file
@@ -0,0 +1,7 @@
|
||||
|
||||
def test_exception_syntax():
|
||||
try:
|
||||
0/0
|
||||
except ZeroDivisionError as e:
|
||||
pass
|
||||
|
||||
@@ -43,15 +43,15 @@ then the test collection looks like this::
|
||||
|
||||
$ py.test --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 2 items
|
||||
<Module 'check_myapp.py'>
|
||||
<Class 'CheckMyApp'>
|
||||
<Instance '()'>
|
||||
<Function 'check_simple'>
|
||||
<Function 'check_complex'>
|
||||
|
||||
============================= in 0.02 seconds =============================
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
Interpreting cmdline arguments as Python packages
|
||||
-----------------------------------------------------
|
||||
@@ -82,8 +82,8 @@ You can always peek at the collection tree without running tests like this::
|
||||
|
||||
. $ py.test --collectonly pythoncollection.py
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 3 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 3 items
|
||||
<Module 'pythoncollection.py'>
|
||||
<Function 'test_function'>
|
||||
<Class 'TestClass'>
|
||||
@@ -91,4 +91,56 @@ You can always peek at the collection tree without running tests like this::
|
||||
<Function 'test_method'>
|
||||
<Function 'test_anothermethod'>
|
||||
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
customizing test collection to find all .py files
|
||||
---------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
You can easily instruct py.test to discover tests from every python file::
|
||||
|
||||
|
||||
# content of pytest.ini
|
||||
[pytest]
|
||||
python_files = *.py
|
||||
|
||||
However, many projects will have a ``setup.py`` which they don't want to be imported. Moreover, there may be files that are only importable by a specific python version.
|
||||
For such cases you can dynamically define files to be ignored by listing
|
||||
them in a ``conftest.py`` file::
|
||||
|
||||
# content of conftest.py
|
||||
import sys
|
||||
|
||||
collect_ignore = ["setup.py"]
|
||||
if sys.version_info[0] > 2:
|
||||
collect_ignore.append("pkg/module_py2.py")
|
||||
|
||||
And then if you have a module file like this::
|
||||
|
||||
# content of pkg/module_py2.py
|
||||
def test_only_on_python2():
|
||||
try:
|
||||
assert 0
|
||||
except Exception, e:
|
||||
pass
|
||||
|
||||
and a setup.py dummy file like this::
|
||||
|
||||
# content of setup.py
|
||||
0/0 # will raise exception if imported
|
||||
|
||||
then a pytest run on a python2 interpreter will find the one test and will leave
out the setup.py file::
|
||||
|
||||
$ py.test --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 1 items
|
||||
<Module 'pkg/module_py2.py'>
|
||||
<Function 'test_only_on_python2'>
|
||||
|
||||
============================= in 0.01 seconds =============================
|
||||
|
||||
If you run with a Python3 interpreter, the module added through the conftest.py file will not be considered for test collection.
|
||||
|
||||
@@ -13,8 +13,8 @@ get on the terminal - we are working on that):
|
||||
|
||||
assertion $ py.test failure_demo.py
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 39 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 39 items
|
||||
|
||||
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
|
||||
|
||||
@@ -30,7 +30,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:15: AssertionError
|
||||
_________________________ TestFailing.test_simple __________________________
|
||||
|
||||
self = <failure_demo.TestFailing object at 0x10134bc50>
|
||||
self = <failure_demo.TestFailing object at 0x2727750>
|
||||
|
||||
def test_simple(self):
|
||||
def f():
|
||||
@@ -40,13 +40,13 @@ get on the terminal - we are working on that):
|
||||
|
||||
> assert f() == g()
|
||||
E assert 42 == 43
|
||||
E + where 42 = <function f at 0x101322320>()
|
||||
E + and 43 = <function g at 0x101322398>()
|
||||
E + where 42 = <function f at 0x26cf488>()
|
||||
E + and 43 = <function g at 0x26cf500>()
|
||||
|
||||
failure_demo.py:28: AssertionError
|
||||
____________________ TestFailing.test_simple_multiline _____________________
|
||||
|
||||
self = <failure_demo.TestFailing object at 0x10134b150>
|
||||
self = <failure_demo.TestFailing object at 0x27277d0>
|
||||
|
||||
def test_simple_multiline(self):
|
||||
otherfunc_multi(
|
||||
@@ -66,19 +66,19 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:11: AssertionError
|
||||
___________________________ TestFailing.test_not ___________________________
|
||||
|
||||
self = <failure_demo.TestFailing object at 0x10134b710>
|
||||
self = <failure_demo.TestFailing object at 0x2727890>
|
||||
|
||||
def test_not(self):
|
||||
def f():
|
||||
return 42
|
||||
> assert not f()
|
||||
E assert not 42
|
||||
E + where 42 = <function f at 0x101322398>()
|
||||
E + where 42 = <function f at 0x26cf7d0>()
|
||||
|
||||
failure_demo.py:38: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_text _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x10134be10>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x2727c10>
|
||||
|
||||
def test_eq_text(self):
|
||||
> assert 'spam' == 'eggs'
|
||||
@@ -89,7 +89,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:42: AssertionError
|
||||
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101347110>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26d1110>
|
||||
|
||||
def test_eq_similar_text(self):
|
||||
> assert 'foo 1 bar' == 'foo 2 bar'
|
||||
@@ -102,7 +102,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:45: AssertionError
|
||||
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343d50>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26d1190>
|
||||
|
||||
def test_eq_multiline_text(self):
|
||||
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
|
||||
@@ -115,7 +115,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:48: AssertionError
|
||||
______________ TestSpecialisedExplanations.test_eq_long_text _______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x10134b210>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26d1090>
|
||||
|
||||
def test_eq_long_text(self):
|
||||
a = '1'*100 + 'a' + '2'*100
|
||||
@@ -132,7 +132,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:53: AssertionError
|
||||
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343d90>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c7ed0>
|
||||
|
||||
def test_eq_long_text_multiline(self):
|
||||
a = '1\n'*100 + 'a' + '2\n'*100
|
||||
@@ -156,7 +156,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:58: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_list _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343dd0>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c7190>
|
||||
|
||||
def test_eq_list(self):
|
||||
> assert [0, 1, 2] == [0, 1, 3]
|
||||
@@ -166,7 +166,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:61: AssertionError
|
||||
______________ TestSpecialisedExplanations.test_eq_list_long _______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343b90>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c7410>
|
||||
|
||||
def test_eq_list_long(self):
|
||||
a = [0]*100 + [1] + [3]*100
|
||||
@@ -178,7 +178,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:66: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_dict _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343210>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c78d0>
|
||||
|
||||
def test_eq_dict(self):
|
||||
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
|
||||
@@ -191,7 +191,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:69: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_set __________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343990>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c7690>
|
||||
|
||||
def test_eq_set(self):
|
||||
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
|
||||
@@ -207,7 +207,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:72: AssertionError
|
||||
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343590>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c7510>
|
||||
|
||||
def test_eq_longer_list(self):
|
||||
> assert [1,2] == [1,2,3]
|
||||
@@ -217,7 +217,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:75: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_in_list _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343e50>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c3190>
|
||||
|
||||
def test_in_list(self):
|
||||
> assert 1 in [0, 2, 3, 4, 5]
|
||||
@@ -226,7 +226,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:78: AssertionError
|
||||
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x10133bb10>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c3d10>
|
||||
|
||||
def test_not_in_text_multiline(self):
|
||||
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
|
||||
@@ -244,7 +244,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:82: AssertionError
|
||||
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x10133b990>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c3d50>
|
||||
|
||||
def test_not_in_text_single(self):
|
||||
text = 'single foo line'
|
||||
@@ -257,7 +257,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:86: AssertionError
|
||||
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x10133bbd0>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c3790>
|
||||
|
||||
def test_not_in_text_single_long(self):
|
||||
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
|
||||
@@ -270,7 +270,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:90: AssertionError
|
||||
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x101343510>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x26c3150>
|
||||
|
||||
def test_not_in_text_single_long_term(self):
|
||||
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
|
||||
@@ -289,7 +289,7 @@ get on the terminal - we are working on that):
|
||||
i = Foo()
|
||||
> assert i.b == 2
|
||||
E assert 1 == 2
|
||||
E + where 1 = <failure_demo.Foo object at 0x10133b390>.b
|
||||
E + where 1 = <failure_demo.Foo object at 0x26c3850>.b
|
||||
|
||||
failure_demo.py:101: AssertionError
|
||||
_________________________ test_attribute_instance __________________________
|
||||
@@ -299,8 +299,8 @@ get on the terminal - we are working on that):
|
||||
b = 1
|
||||
> assert Foo().b == 2
|
||||
E assert 1 == 2
|
||||
E + where 1 = <failure_demo.Foo object at 0x10133b250>.b
|
||||
E + where <failure_demo.Foo object at 0x10133b250> = <class 'failure_demo.Foo'>()
|
||||
E + where 1 = <failure_demo.Foo object at 0x26c39d0>.b
|
||||
E + where <failure_demo.Foo object at 0x26c39d0> = <class 'failure_demo.Foo'>()
|
||||
|
||||
failure_demo.py:107: AssertionError
|
||||
__________________________ test_attribute_failure __________________________
|
||||
@@ -316,7 +316,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:116:
|
||||
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
|
||||
|
||||
self = <failure_demo.Foo object at 0x10133bd50>
|
||||
self = <failure_demo.Foo object at 0x2890810>
|
||||
|
||||
def _get_b(self):
|
||||
> raise Exception('Failed to get attrib')
|
||||
@@ -332,15 +332,15 @@ get on the terminal - we are working on that):
|
||||
b = 2
|
||||
> assert Foo().b == Bar().b
|
||||
E assert 1 == 2
|
||||
E + where 1 = <failure_demo.Foo object at 0x10133bb50>.b
|
||||
E + where <failure_demo.Foo object at 0x10133bb50> = <class 'failure_demo.Foo'>()
|
||||
E + and 2 = <failure_demo.Bar object at 0x10133b1d0>.b
|
||||
E + where <failure_demo.Bar object at 0x10133b1d0> = <class 'failure_demo.Bar'>()
|
||||
E + where 1 = <failure_demo.Foo object at 0x26c35d0>.b
|
||||
E + where <failure_demo.Foo object at 0x26c35d0> = <class 'failure_demo.Foo'>()
|
||||
E + and 2 = <failure_demo.Bar object at 0x26c3c50>.b
|
||||
E + where <failure_demo.Bar object at 0x26c3c50> = <class 'failure_demo.Bar'>()
|
||||
|
||||
failure_demo.py:124: AssertionError
|
||||
__________________________ TestRaises.test_raises __________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x1013697e8>
|
||||
self = <failure_demo.TestRaises instance at 0x2738c68>
|
||||
|
||||
def test_raises(self):
|
||||
s = 'qwe'
|
||||
@@ -352,10 +352,10 @@ get on the terminal - we are working on that):
|
||||
> int(s)
|
||||
E ValueError: invalid literal for int() with base 10: 'qwe'
|
||||
|
||||
<0-codegen /Users/hpk/p/pytest/_pytest/python.py:833>:1: ValueError
|
||||
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:838>:1: ValueError
|
||||
______________________ TestRaises.test_raises_doesnt _______________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x101372a70>
|
||||
self = <failure_demo.TestRaises instance at 0x27413b0>
|
||||
|
||||
def test_raises_doesnt(self):
|
||||
> raises(IOError, "int('3')")
|
||||
@@ -364,7 +364,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:136: Failed
|
||||
__________________________ TestRaises.test_raise ___________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x10136a908>
|
||||
self = <failure_demo.TestRaises instance at 0x273a4d0>
|
||||
|
||||
def test_raise(self):
|
||||
> raise ValueError("demo error")
|
||||
@@ -373,7 +373,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:139: ValueError
|
||||
________________________ TestRaises.test_tupleerror ________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x10136c710>
|
||||
self = <failure_demo.TestRaises instance at 0x272d248>
|
||||
|
||||
def test_tupleerror(self):
|
||||
> a,b = [1]
|
||||
@@ -382,7 +382,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:142: ValueError
|
||||
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x101365488>
|
||||
self = <failure_demo.TestRaises instance at 0x272df80>
|
||||
|
||||
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
|
||||
l = [1,2,3]
|
||||
@@ -395,11 +395,11 @@ get on the terminal - we are working on that):
|
||||
l is [1, 2, 3]
|
||||
________________________ TestRaises.test_some_error ________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x101367248>
|
||||
self = <failure_demo.TestRaises instance at 0x272ed88>
|
||||
|
||||
def test_some_error(self):
|
||||
> if namenotexi:
|
||||
E NameError: global name 'namenotexi' is not defined
|
||||
E NameError: global name 'namenotexi' is not defined
|
||||
|
||||
failure_demo.py:150: NameError
|
||||
____________________ test_dynamic_compile_shows_nicely _____________________
|
||||
@@ -420,10 +420,10 @@ get on the terminal - we are working on that):
|
||||
> assert 1 == 0
|
||||
E assert 1 == 0
|
||||
|
||||
<2-codegen 'abc-123' /Users/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
|
||||
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
|
||||
____________________ TestMoreErrors.test_complex_error _____________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x101380f38>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x2741758>
|
||||
|
||||
def test_complex_error(self):
|
||||
def f():
|
||||
@@ -452,7 +452,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:5: AssertionError
|
||||
___________________ TestMoreErrors.test_z1_unpack_error ____________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x101367f80>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x2728d88>
|
||||
|
||||
def test_z1_unpack_error(self):
|
||||
l = []
|
||||
@@ -462,7 +462,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:179: ValueError
|
||||
____________________ TestMoreErrors.test_z2_type_error _____________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x101363dd0>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x272ab90>
|
||||
|
||||
def test_z2_type_error(self):
|
||||
l = 3
|
||||
@@ -472,19 +472,19 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:183: TypeError
|
||||
______________________ TestMoreErrors.test_startswith ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x101364bd8>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x272b998>
|
||||
|
||||
def test_startswith(self):
|
||||
s = "123"
|
||||
g = "456"
|
||||
> assert s.startswith(g)
|
||||
E assert <built-in method startswith of str object at 0x1013524e0>('456')
|
||||
E + where <built-in method startswith of str object at 0x1013524e0> = '123'.startswith
|
||||
E assert <built-in method startswith of str object at 0x2734ad0>('456')
|
||||
E + where <built-in method startswith of str object at 0x2734ad0> = '123'.startswith
|
||||
|
||||
failure_demo.py:188: AssertionError
|
||||
__________________ TestMoreErrors.test_startswith_nested ___________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x101363fc8>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x272a0e0>
|
||||
|
||||
def test_startswith_nested(self):
|
||||
def f():
|
||||
@@ -492,15 +492,15 @@ get on the terminal - we are working on that):
|
||||
def g():
|
||||
return "456"
|
||||
> assert f().startswith(g())
|
||||
E assert <built-in method startswith of str object at 0x1013524e0>('456')
|
||||
E + where <built-in method startswith of str object at 0x1013524e0> = '123'.startswith
|
||||
E + where '123' = <function f at 0x10132c500>()
|
||||
E + and '456' = <function g at 0x10132c8c0>()
|
||||
E assert <built-in method startswith of str object at 0x2734ad0>('456')
|
||||
E + where <built-in method startswith of str object at 0x2734ad0> = '123'.startswith
|
||||
E + where '123' = <function f at 0x27549b0>()
|
||||
E + and '456' = <function g at 0x27488c0>()
|
||||
|
||||
failure_demo.py:195: AssertionError
|
||||
_____________________ TestMoreErrors.test_global_func ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1013696c8>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x273b7a0>
|
||||
|
||||
def test_global_func(self):
|
||||
> assert isinstance(globf(42), float)
|
||||
@@ -510,18 +510,18 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:198: AssertionError
|
||||
_______________________ TestMoreErrors.test_instance _______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1013671b8>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x272dd88>
|
||||
|
||||
def test_instance(self):
|
||||
self.x = 6*7
|
||||
> assert self.x != 42
|
||||
E assert 42 != 42
|
||||
E + where 42 = <failure_demo.TestMoreErrors instance at 0x1013671b8>.x
|
||||
E + where 42 = <failure_demo.TestMoreErrors instance at 0x272dd88>.x
|
||||
|
||||
failure_demo.py:202: AssertionError
|
||||
_______________________ TestMoreErrors.test_compare ________________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x101366560>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x2729560>
|
||||
|
||||
def test_compare(self):
|
||||
> assert globf(10) < 5
|
||||
@@ -531,7 +531,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:205: AssertionError
|
||||
_____________________ TestMoreErrors.test_try_finally ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1013613b0>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x271f3b0>
|
||||
|
||||
def test_try_finally(self):
|
||||
x = 1
|
||||
@@ -540,4 +540,4 @@ get on the terminal - we are working on that):
|
||||
E assert 1 == 0
|
||||
|
||||
failure_demo.py:210: AssertionError
|
||||
======================== 39 failed in 0.39 seconds =========================
|
||||
======================== 39 failed in 0.18 seconds =========================
|
||||
@@ -22,20 +22,22 @@ Here is a basic pattern how to achieve this::
|
||||
|
||||
|
||||
For this to work we need to add a command line option and
|
||||
provide the ``cmdopt`` through a :ref:`function argument <funcarg>` factory::
|
||||
provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`::
|
||||
|
||||
# content of conftest.py
|
||||
import pytest
|
||||
|
||||
def pytest_addoption(parser):
|
||||
parser.addoption("--cmdopt", action="store", default="type1",
|
||||
help="my option: type1 or type2")
|
||||
|
||||
def pytest_funcarg__cmdopt(request):
|
||||
@pytest.fixture
|
||||
def cmdopt(request):
|
||||
return request.config.option.cmdopt
|
||||
|
||||
Let's run this without supplying our new command line option::
|
||||
Let's run this without supplying our new option::
|
||||
|
||||
$ py.test -q test_sample.py
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
================================= FAILURES =================================
|
||||
_______________________________ test_answer ________________________________
|
||||
@@ -53,12 +55,10 @@ Let's run this without supplying our new command line option::
|
||||
test_sample.py:6: AssertionError
|
||||
----------------------------- Captured stdout ------------------------------
|
||||
first
|
||||
1 failed in 0.02 seconds
|
||||
|
||||
And now with supplying a command line option::
|
||||
|
||||
$ py.test -q --cmdopt=type2
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
================================= FAILURES =================================
|
||||
_______________________________ test_answer ________________________________
|
||||
@@ -76,14 +76,11 @@ And now with supplying a command line option::
|
||||
test_sample.py:6: AssertionError
|
||||
----------------------------- Captured stdout ------------------------------
|
||||
second
|
||||
1 failed in 0.02 seconds
|
||||
|
||||
Ok, this completes the basic pattern. However, one often rather
|
||||
wants to process command line options outside of the test and
|
||||
rather pass in different or more complex objects. See the
|
||||
next example or refer to :ref:`mysetup` for more information
|
||||
on real-life examples.
|
||||
|
||||
You can see that the command line option arrived in our test. This
|
||||
completes the basic pattern. However, one often rather wants to process
|
||||
command line options outside of the test and rather pass in different or
|
||||
more complex objects.
|
||||
|
||||
Dynamically adding command line options
|
||||
--------------------------------------------------------------
|
||||
@@ -109,13 +106,10 @@ directory with the above conftest.py::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
gw0 I
|
||||
gw0 [0]
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 0 items
|
||||
|
||||
scheduling tests via LoadScheduling
|
||||
|
||||
============================= in 0.48 seconds =============================
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
.. _`excontrolskip`:
|
||||
|
||||
@@ -156,25 +150,25 @@ and when running it will see a skipped "slow" test::
|
||||
|
||||
$ py.test -rs # "-rs" means report details on the little 's'
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 2 items
|
||||
|
||||
test_module.py .s
|
||||
========================= short test summary info ==========================
|
||||
SKIP [1] /Users/hpk/tmp/doc-exec-172/conftest.py:9: need --runslow option to run
|
||||
SKIP [1] /tmp/doc-exec-293/conftest.py:9: need --runslow option to run
|
||||
|
||||
=================== 1 passed, 1 skipped in 0.02 seconds ====================
|
||||
=================== 1 passed, 1 skipped in 0.01 seconds ====================
|
||||
|
||||
Or run it including the ``slow`` marked test::
|
||||
|
||||
$ py.test --runslow
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 2 items
|
||||
|
||||
test_module.py ..
|
||||
|
||||
========================= 2 passed in 0.02 seconds =========================
|
||||
========================= 2 passed in 0.01 seconds =========================
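
The ``conftest.py`` producing the skip above is defined just before this hunk.
The usual pattern (a sketch assuming the ``--runslow`` option and ``slow``
marker names shown in the output) combines ``pytest_addoption`` with a
``pytest_runtest_setup`` check::

    # content of conftest.py -- sketch
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")

    def pytest_runtest_setup(item):
        if "slow" in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")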
|
||||
|
||||
Writing well integrated assertion helpers
|
||||
--------------------------------------------------
|
||||
@@ -203,7 +197,6 @@ unless the ``--fulltrace`` command line option is specified.
|
||||
Let's run our little function::
|
||||
|
||||
$ py.test -q test_checkconfig.py
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
================================= FAILURES =================================
|
||||
______________________________ test_something ______________________________
|
||||
@@ -213,7 +206,6 @@ Let's run our little function::
|
||||
E Failed: not configured: 42
|
||||
|
||||
test_checkconfig.py:8: Failed
|
||||
1 failed in 0.02 seconds
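
The helper exercised above lives outside this hunk; the key ingredient is
setting ``__tracebackhide__ = True`` inside the helper so that its own frame is
left out of the failure traceback. A sketch along those lines::

    # content of test_checkconfig.py -- sketch
    import pytest

    def checkconfig(x):
        __tracebackhide__ = True   # hide this helper frame in failure reports
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % (x,))

    def test_something():
        checkconfig(42)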
|
||||
|
||||
Detect if running from within a py.test run
|
||||
--------------------------------------------------------------
|
||||
@@ -240,9 +232,9 @@ and then check for the ``sys._called_from_test`` flag::
|
||||
# called from within a test run
|
||||
else:
|
||||
# called "normally"
|
||||
|
||||
|
||||
accordingly in your application. It's also a good idea
|
||||
to rather use your own application module rather than ``sys``
|
||||
to use your own application module rather than ``sys``
|
||||
for handling the flag.
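
The flag itself is typically set and cleared from a ``conftest.py``; here is a
minimal sketch matching the ``sys._called_from_test`` name used above::

    # content of conftest.py -- sketch
    import sys

    def pytest_configure(config):
        sys._called_from_test = True

    def pytest_unconfigure(config):
        del sys._called_from_test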
|
||||
|
||||
Adding info to test report header
|
||||
@@ -261,9 +253,9 @@ which will add the string to the test header accordingly::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
project deps: mylib-1.1
|
||||
collecting ... collected 0 items
|
||||
collected 0 items
|
||||
|
||||
============================= in 0.00 seconds =============================
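
The ``project deps`` line above comes from a ``pytest_report_header`` hook
defined before this hunk; in its simplest form the hook just returns a string,
as in this sketch::

    # content of conftest.py -- sketch
    def pytest_report_header(config):
        return "project deps: mylib-1.1"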
|
||||
|
||||
@@ -275,7 +267,7 @@ information on e.g. the value of ``config.option.verbose`` so that
|
||||
you present more information appropriately::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
|
||||
def pytest_report_header(config):
|
||||
if config.option.verbose > 0:
|
||||
return ["info1: did you know that ...", "did you?"]
|
||||
@@ -284,7 +276,7 @@ which will add info only when run with "--v"::
|
||||
|
||||
$ py.test -v
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3 -- /Users/hpk/venv/0/bin/python
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
info1: did you know that ...
|
||||
did you?
|
||||
collecting ... collected 0 items
|
||||
@@ -295,7 +287,119 @@ and nothing when run plainly::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform darwin -- Python 2.7.1 -- pytest-2.1.3
|
||||
collecting ... collected 0 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 0 items
|
||||
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
profiling test duration
|
||||
--------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
.. versionadded: 2.2
|
||||
|
||||
If you have a large, slow-running test suite you might want to find
out which tests are the slowest. Let's make an artificial test suite::
|
||||
|
||||
# content of test_some_are_slow.py
|
||||
|
||||
import time
|
||||
|
||||
def test_funcfast():
|
||||
pass
|
||||
|
||||
def test_funcslow1():
|
||||
time.sleep(0.1)
|
||||
|
||||
def test_funcslow2():
|
||||
time.sleep(0.2)
|
||||
|
||||
Now we can profile which test functions execute the slowest::
|
||||
|
||||
$ py.test --durations=3
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 3 items
|
||||
|
||||
test_some_are_slow.py ...
|
||||
|
||||
========================= slowest 3 test durations =========================
|
||||
0.20s call test_some_are_slow.py::test_funcslow2
|
||||
0.10s call test_some_are_slow.py::test_funcslow1
|
||||
0.00s setup test_some_are_slow.py::test_funcfast
|
||||
========================= 3 passed in 0.31 seconds =========================
|
||||
|
||||
incremental testing - test steps
|
||||
---------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Sometimes you may have a testing situation which consists of a series
|
||||
of test steps. If one step fails it makes no sense to execute further
|
||||
steps as they are all expected to fail anyway and their tracebacks
|
||||
add no insight. Here is a simple ``conftest.py`` file which introduces
|
||||
an ``incremental`` marker which is to be used on classes::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
import pytest
|
||||
|
||||
def pytest_runtest_makereport(item, call):
|
||||
if "incremental" in item.keywords:
|
||||
if call.excinfo is not None:
|
||||
parent = item.parent
|
||||
parent._previousfailed = item
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
if "incremental" in item.keywords:
|
||||
previousfailed = getattr(item.parent, "_previousfailed", None)
|
||||
if previousfailed is not None:
|
||||
pytest.xfail("previous test failed (%s)" %previousfailed.name)
|
||||
|
||||
These two hook implementations work together to abort incremental-marked
|
||||
tests in a class. Here is a test module example::
|
||||
|
||||
# content of test_step.py
|
||||
|
||||
import pytest
|
||||
|
||||
@pytest.mark.incremental
|
||||
class TestUserHandling:
|
||||
def test_login(self):
|
||||
pass
|
||||
def test_modification(self):
|
||||
assert 0
|
||||
def test_deletion(self):
|
||||
pass
|
||||
|
||||
def test_normal():
|
||||
pass
|
||||
|
||||
If we run this::
|
||||
|
||||
$ py.test -rx
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.0
|
||||
collected 4 items
|
||||
|
||||
test_step.py .Fx.
|
||||
|
||||
================================= FAILURES =================================
|
||||
____________________ TestUserHandling.test_modification ____________________
|
||||
|
||||
self = <test_step.TestUserHandling instance at 0x1db2638>
|
||||
|
||||
def test_modification(self):
|
||||
> assert 0
|
||||
E assert 0
|
||||
|
||||
test_step.py:9: AssertionError
|
||||
========================= short test summary info ==========================
|
||||
XFAIL test_step.py::TestUserHandling::()::test_deletion
|
||||
reason: previous test failed (test_modification)
|
||||
============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
|
||||
|
||||
We'll see that ``test_deletion`` was not executed because ``test_modification``
|
||||
failed. It is reported as an "expected failure".
|
||||
|
||||
@@ -3,12 +3,81 @@ Some Issues and Questions
|
||||
|
||||
.. note::
|
||||
|
||||
If you don't find an answer here, checkout the :ref:`contact channels`
|
||||
to get help.
|
||||
If you don't find an answer here, you may checkout
|
||||
`pytest Q&A at Stackoverflow <http://stackoverflow.com/search?q=pytest>`_
|
||||
or other :ref:`contact channels` to get help.
|
||||
|
||||
On naming, nosetests, licensing and magic
|
||||
------------------------------------------------
|
||||
|
||||
How does py.test relate to nose and unittest?
|
||||
+++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
py.test and nose_ share basic philosophy when it comes
|
||||
to running and writing Python tests. In fact, you can run many tests
|
||||
written for nose with py.test. nose_ was originally created
|
||||
as a clone of ``py.test`` when py.test was in the ``0.8`` release
|
||||
cycle. Note that starting with pytest-2.0, support for running unittest
test suites has been much improved.
|
||||
|
||||
How does py.test relate to twisted's trial?
|
||||
++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
For some time py.test has had builtin support for running tests
written using trial. It does not itself start a reactor, however,
|
||||
and does not handle Deferreds returned from a test. Someone using
|
||||
these features might eventually write a dedicated ``pytest-twisted``
|
||||
plugin which will surely see strong support from the pytest development
|
||||
team.
|
||||
|
||||
How does py.test work with Django?
|
||||
++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
In 2012, some work is going into the `pytest-django plugin <http://pypi.python.org/pypi/pytest-django>`_. It substitutes the usage of Django's
|
||||
``manage.py test`` and allows you to use all pytest features_, most of which
|
||||
are not available from Django directly.
|
||||
|
||||
.. _features: features.html
|
||||
|
||||
|
||||
What's this "magic" with py.test? (historic notes)
|
||||
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
Around 2007 (version ``0.8``) some people thought that py.test
|
||||
was using too much "magic". It had been part of the `pylib`_ which
|
||||
contains a lot of unreleated python library code. Around 2010 there
|
||||
was a major cleanup refactoring, which removed unused or deprecated code
|
||||
and resulted in the new ``pytest`` PyPI package which strictly contains
|
||||
only test-related code. This relese also brought a complete pluginification
|
||||
such that the core is around 300 lines of code and everything else is
|
||||
implemented in plugins. Thus ``pytest`` today is a small, universally runnable
|
||||
and customizable testing framework for Python. Note, however, that
|
||||
``pytest`` uses metaprogramming techniques and reading its source is
|
||||
thus likely not something for Python beginners.
|
||||
|
||||
A second "magic" issue was the assert statement debugging feature.
|
||||
Nowadays, py.test explicitely rewrites assert statements in test modules
|
||||
in order to provide more useful :ref:`assert feedback <assertfeedback>`.
|
||||
This completely avoids previous issues of confusing assertion-reporting.
|
||||
It also means, that you can use Python's ``-O`` optimization without loosing
|
||||
assertions in test modules.
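
To make the rewriting behaviour concrete, here is a small, purely
illustrative test (not part of the original docs); the exact report text
depends on the pytest version, but a rewritten assert shows the inspected
values roughly as the trailing comment indicates::

    # content of test_rewrite_demo.py -- hypothetical example
    def test_membership():
        names = ["alice", "bob"]
        assert "carol" in names
        # reported roughly as:
        #   E   AssertionError: assert 'carol' in ['alice', 'bob']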

py.test contains a second, mostly obsolete assert debugging technique,
invoked via ``--assert=reinterpret`` and activated by default on
Python 2.5: when an ``assert`` statement fails, py.test re-interprets
the expression part to show intermediate values. This technique suffers
from a caveat that the rewriting does not: if your expression has side
effects (better to avoid them anyway!) the intermediate values may not
be the same, confusing the reinterpreter and obfuscating the initial
error (this is also explained at the command line if it happens).
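
A contrived sketch (not from the original text) of the side-effect caveat
described above::

    # hypothetical example: the asserted expression mutates state
    items = [3, 2, 1]

    def test_pop_order():
        # items.pop() removes an element; if the failure is re-interpreted,
        # the expression is evaluated again against the already-mutated list,
        # so the displayed intermediate value can differ from the first run
        assert items.pop() == 3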

You can also turn off all assertion interaction using the
``--assertmode=off`` option.

.. _`py namespaces`: index.html
.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/trunk/py/__init__.py


Why a ``py.test`` instead of a ``pytest`` command?
++++++++++++++++++++++++++++++++++++++++++++++++++

@@ -18,55 +87,14 @@ utilities, all starting with ``py.<TAB>``, thus providing nice
TAB-completion. If
you install ``pip install pycmd`` you get these tools from a separate
package. These days the command line tool could be called ``pytest``
since many people have gotten used to the old name and there
but since many people have gotten used to the old name and there
is another tool named "pytest" we just decided to stick with
``py.test``.

How does py.test relate to nose and unittest?
+++++++++++++++++++++++++++++++++++++++++++++++++

py.test and nose_ share basic philosophy when it comes
to running and writing Python tests. In fact, you can run many tests
written for nose with py.test. nose_ was originally created
as a clone of ``py.test`` when py.test was in the ``0.8`` release
cycle. Note that starting with pytest-2.0 support for running unittest
test suites is majorly improved and you should be able to run
many Django and Twisted test suites without modification.

.. _features: test/features.html


What's this "magic" with py.test?
++++++++++++++++++++++++++++++++++++++++++

Around 2007 (version ``0.8``) some people claimed that py.test
was using too much "magic". Partly this has been fixed by removing
unused, deprecated or complicated code. It is today probably one
of the smallest, most universally runnable and most
customizable testing frameworks for Python. However,
``py.test`` still uses many metaprogramming techniques and
reading its source is thus likely not something for Python beginners.

A second "magic" issue is arguably the assert statement debugging feature. When
loading test modules py.test rewrites the source code of assert statements. When
a rewritten assert statement fails, its error message has more information than
the original. py.test also has a second assert debugging technique. When an
``assert`` statement that was missed by the rewriter fails, py.test
re-interprets the expression to show intermediate values if a test fails. This
second technique suffers from a caveat that the rewriting does not: If your
expression has side effects (better to avoid them anyway!) the intermediate
values may not be the same, confusing the reinterpreter and obfuscating the
initial error (this is also explained at the command line if it happens).
You can turn off all assertion debugging with ``py.test --assertmode=off``.

.. _`py namespaces`: index.html
.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/trunk/py/__init__.py

``py.test`` for now.

Function arguments, parametrized tests and setup
-------------------------------------------------------

.. _funcargs: test/funcargs.html
.. _funcargs: funcargs.html

Is using funcarg- versus xUnit setup a style question?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

@@ -80,9 +108,9 @@ code (like e.g. the monkeypatch_, the tmpdir_ or capture_ funcargs)
because the support code can register setup/teardown functions
in a managed class/module/function scope.

.. _monkeypatch: test/plugin/monkeypatch.html
.. _tmpdir: test/plugin/tmpdir.html
.. _capture: test/plugin/capture.html
.. _monkeypatch: monkeypatch.html
.. _tmpdir: tmpdir.html
.. _capture: capture.html

.. _`why pytest_pyfuncarg__ methods?`:

@@ -91,13 +119,18 @@ Why the ``pytest_funcarg__*`` name for funcarg factories?

We like `Convention over Configuration`_ and didn't see much point
in allowing a more flexible or abstract mechanism. Moreover,
is is nice to be able to search for ``pytest_funcarg__MYARG`` in
a source code and safely find all factory functions for
it is nice to be able to search for ``pytest_funcarg__MYARG`` in
source code and safely find all factory functions for
the ``MYARG`` function argument.

.. note::

    With pytest-2.3 you can use the :ref:`@pytest.fixture` decorator
    to mark a function as a fixture function.

.. _`Convention over Configuration`: http://en.wikipedia.org/wiki/Convention_over_Configuration
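
To make the naming convention above concrete, here is a small illustrative
sketch (the argument names and return values are invented); both spellings
register a factory for a function argument of the corresponding name::

    # content of conftest.py -- hypothetical illustration
    import pytest

    def pytest_funcarg__myarg(request):
        # pre-2.3 naming convention: the prefix is how pytest finds the factory
        return 42

    @pytest.fixture
    def otherarg():
        # pytest-2.3 style: the decorator marks the factory explicitly
        return 23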

Can I yield multiple values from a funcarg factory function?
Can I yield multiple values from a fixture function?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

There are two conceptual reasons why yielding from a factory function
@@ -113,8 +146,12 @@ is not possible:
    policy - in real-world examples some combinations
    often should not run.

Use the `pytest_generate_tests`_ hook to solve both issues
and implement the `parametrization scheme of your choice`_.
However, with pytest-2.3 you can use the :ref:`@pytest.fixture` decorator
and specify ``params`` so that all tests depending on the factory-created
resource will run multiple times with different parameters.

You can also use the `pytest_generate_tests`_ hook to
implement the `parametrization scheme of your choice`_.
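
For orientation, a minimal sketch of what the ``params`` usage mentioned
above looks like (the fixture name and values are invented; a fuller, worked
example appears in the fixture chapter further down)::

    # hypothetical sketch of a parametrized fixture
    import pytest

    @pytest.fixture(params=["sqlite", "postgres"])
    def backend(request):
        # each test using "backend" runs once per parameter value
        return request.param

    def test_backend_name(backend):
        assert backend in ("sqlite", "postgres")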

.. _`pytest_generate_tests`: test/funcargs.html#parametrizing-tests
.. _`parametrization scheme of your choice`: http://tetamap.wordpress.com/2009/05/13/parametrizing-python-tests-generalized/

673
doc/en/fixture.txt
Normal file
@@ -0,0 +1,673 @@

.. _fixture:
.. _fixtures:
.. _`fixture functions`:

pytest fixtures: explicit, modular, scalable
========================================================

.. currentmodule:: _pytest.python

.. versionadded:: 2.0/2.3

.. _`xUnit`: http://en.wikipedia.org/wiki/XUnit
.. _`general purpose of test fixtures`: http://en.wikipedia.org/wiki/Test_fixture#Software
.. _`Dependency injection`: http://en.wikipedia.org/wiki/Dependency_injection#Definition

The `general purpose of test fixtures`_ is to provide a fixed baseline
upon which tests can reliably and repeatedly execute. pytest-2.3 fixtures
offer dramatic improvements over the classic xUnit style of setup/teardown
functions:

* fixtures have explicit names and are activated by declaring their use
  from test functions, modules, classes or whole projects.

* fixtures are implemented in a modular manner, as each fixture name
  triggers a *fixture function* which can itself easily use other
  fixtures.

* fixture management scales from simple unit to complex
  functional testing, allowing you to parametrize fixtures or tests according
  to configuration and component options.

In addition, pytest continues to support :ref:`xunitsetup` which it
originally introduced in 2005. You can mix both styles, moving
incrementally from classic to new style, if you prefer. You can also
start out from existing :ref:`unittest.TestCase style <unittest.TestCase>`
or :ref:`nose based <nosestyle>` projects.

.. _`funcargs`:
.. _`funcarg mechanism`:
.. _`fixture function`:
.. _`@pytest.fixture`:
.. _`pytest.fixture`:

Fixtures as Function arguments (funcargs)
-----------------------------------------

Test functions can receive fixture objects by naming them as an input
argument. For each argument name, a fixture function with that name provides
the fixture object. Fixture functions are registered by marking them with
:py:func:`@pytest.fixture <pytest.fixture>`. Let's look at a simple
self-contained test module containing a fixture and a test function
using it::

    # content of ./test_smtpsimple.py
    import pytest

    @pytest.fixture
    def smtp():
        import smtplib
        return smtplib.SMTP("merlinux.eu")

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert "merlinux" in msg
        assert 0  # for demo purposes

Here, ``test_ehlo`` needs the ``smtp`` fixture value. pytest
will discover and call the ``@pytest.fixture`` marked ``smtp``
fixture function. Running the test looks like this::

    $ py.test test_smtpsimple.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.0
    collected 1 items

    test_smtpsimple.py F

    ================================= FAILURES =================================
    ________________________________ test_ehlo _________________________________

    smtp = <smtplib.SMTP instance at 0x1eade18>

        def test_ehlo(smtp):
            response, msg = smtp.ehlo()
            assert response == 250
            assert "merlinux" in msg
    >       assert 0  # for demo purposes
    E       assert 0

    test_smtpsimple.py:12: AssertionError
    ========================= 1 failed in 0.20 seconds =========================

In the failure traceback we see that the test function was called with a
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
function. The test function fails on our deliberate ``assert 0``. Here is
an exact protocol of how py.test comes to call the test function this way:

1. pytest :ref:`finds <test discovery>` the ``test_ehlo`` because
   of the ``test_`` prefix. The test function needs a function argument
   named ``smtp``. A matching fixture function is discovered by
   looking for a fixture-marked function named ``smtp``.

2. ``smtp()`` is called to create an instance.

3. ``test_ehlo(<SMTP instance>)`` is called and fails in the last
   line of the test function.

Note that if you misspell a function argument or want
to use one that isn't available, you'll see an error
with a list of available function arguments.

.. Note::

    You can always issue::

        py.test --fixtures test_simplefactory.py

    to see available fixtures.

In versions prior to 2.3 there was no ``@pytest.fixture`` marker
and you had to use a magic ``pytest_funcarg__NAME`` prefix
for the fixture factory. This remains and will remain supported
but is no longer advertised as the primary means of declaring fixture
functions.

Funcargs: a prime example of dependency injection
---------------------------------------------------

When injecting fixtures into test functions, pytest-2.0 introduced the
term "funcargs" or "funcarg mechanism", which continues to be present
also in the pytest-2.3 docs. It now refers to the specific case of injecting
fixture values as arguments to test functions. With pytest-2.3 there are
more possibilities to use fixtures, but "funcargs" will probably remain
the main way of dealing with fixtures.

As the following examples show in more detail, funcargs allow test
functions to easily receive and work against specific pre-initialized
application objects without having to care about import/setup/cleanup
details. It's a prime example of `dependency injection`_ where fixture
functions take the role of the *injector* and test functions are the
*consumers* of fixture objects.

.. _smtpshared:

Working with a session-shared fixture
-----------------------------------------------------------------

.. regendoc:wipe

Fixtures requiring network access depend on connectivity and are
usually time-expensive to create. Extending the previous example, we
can add a ``scope='session'`` parameter to the ``@pytest.fixture`` decorator
to cause the decorated ``smtp`` fixture function to only be invoked once
per test session. Multiple test functions will thus each receive the
same ``smtp`` fixture instance. The next example also extracts
the fixture function into a separate ``conftest.py`` file so that
all tests in the directory can access the fixture function::

    # content of conftest.py
    import pytest
    import smtplib

    @pytest.fixture(scope="session")
    def smtp():
        return smtplib.SMTP("merlinux.eu")

The name of the fixture again is ``smtp`` and you can access its result by
listing the name ``smtp`` as an input parameter in any test or setup
function (in or below the directory where ``conftest.py`` is located)::

    # content of test_module.py

    def test_ehlo(smtp):
        response = smtp.ehlo()
        assert response[0] == 250
        assert "merlinux" in response[1]
        assert 0  # for demo purposes

    def test_noop(smtp):
        response = smtp.noop()
        assert response[0] == 250
        assert 0  # for demo purposes

We deliberately insert failing ``assert 0`` statements in order to
inspect what is going on and can now run the tests::

    $ py.test test_module.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.0
    collected 2 items

    test_module.py FF

    ================================= FAILURES =================================
    ________________________________ test_ehlo _________________________________

    smtp = <smtplib.SMTP instance at 0x29213b0>

        def test_ehlo(smtp):
            response = smtp.ehlo()
            assert response[0] == 250
            assert "merlinux" in response[1]
    >       assert 0  # for demo purposes
    E       assert 0

    test_module.py:6: AssertionError
    ________________________________ test_noop _________________________________

    smtp = <smtplib.SMTP instance at 0x29213b0>

        def test_noop(smtp):
            response = smtp.noop()
            assert response[0] == 250
    >       assert 0  # for demo purposes
    E       assert 0

    test_module.py:11: AssertionError
    ========================= 2 failed in 0.20 seconds =========================

You see the two ``assert 0`` failures and, more importantly, you can also see
that the same (session-scoped) ``smtp`` object was passed into the two
test functions because pytest shows the incoming argument values in the
traceback. As a result, the two test functions using ``smtp`` run as
quickly as a single one because they reuse the same instance.

.. _`request-context`:

Fixtures can interact with the requesting test context
-------------------------------------------------------------

Using the special :py:class:`request <FixtureRequest>` argument,
fixture functions can introspect the function, class or module for which
they are invoked or can register finalizing (cleanup)
functions which are called after the last test in their scope has finished
execution.

Further extending the previous ``smtp`` fixture example, let's try to
read the server URL from the module namespace, use module-scoped caching and
register a finalizer that closes the smtp connection after the last
test has finished execution::

    # content of conftest.py
    import pytest
    import smtplib

    @pytest.fixture(scope="module")
    def smtp(request):
        server = getattr(request.module, "smtpserver", "merlinux.eu")
        smtp = smtplib.SMTP(server)
        def fin():
            print ("finalizing %s" % smtp)
            smtp.close()
        request.addfinalizer(fin)
        return smtp

The registered ``fin`` function will be called when the last test
using it has executed::

    $ py.test -s -q --tb=no
    FF
    finalizing <smtplib.SMTP instance at 0x1a9d5a8>

We see that the ``smtp`` instance is finalized after the two
tests using it have executed. If we had specified ``scope='function'``
then fixture setup and cleanup would occur around each single test.
Note that the test module itself did not need to change!

Let's quickly create another test module that actually sets the
server URL and has a test to verify that the fixture picks it up::

    # content of test_anothersmtp.py

    smtpserver = "mail.python.org"  # will be read by smtp fixture

    def test_showhelo(smtp):
        assert 0, smtp.helo()

Running it::

    $ py.test -qq --tb=short test_anothersmtp.py
    F
    ================================= FAILURES =================================
    ______________________________ test_showhelo _______________________________
    test_anothersmtp.py:5: in test_showhelo
    >   assert 0, smtp.helo()
    E   AssertionError: (250, 'mail.python.org')

.. _`request`: :py:class:`_pytest.python.FixtureRequest`

.. _`fixture-parametrize`:

Parametrizing a session-shared fixture
-----------------------------------------------------------------

Fixture functions can be parametrized, in which case they will be called
multiple times, each time executing the set of dependent tests, i.e. the
tests that depend on this fixture. Test functions usually do not need
to be aware of their re-running. Fixture parametrization helps to
write exhaustive functional tests for components which themselves can be
configured in multiple ways.

Extending the previous example, we can flag the fixture to create two
``smtp`` fixture instances, which will cause all tests using the fixture
to run twice. The fixture function gets access to each parameter
through the special `request`_ object::

    # content of conftest.py
    import pytest
    import smtplib

    @pytest.fixture(scope="session",
                    params=["merlinux.eu", "mail.python.org"])
    def smtp(request):
        smtp = smtplib.SMTP(request.param)
        def fin():
            print ("finalizing %s" % smtp)
            smtp.close()
        request.addfinalizer(fin)
        return smtp

The main change is the declaration of ``params``, a list of values
for each of which the fixture function will execute and can access
a value via ``request.param``. No test function code needs to change.
So let's just do another run::

    $ py.test -q test_module.py
    FFFF
    ================================= FAILURES =================================
    __________________________ test_ehlo[merlinux.eu] __________________________

    smtp = <smtplib.SMTP instance at 0x1eae680>

        def test_ehlo(smtp):
            response = smtp.ehlo()
            assert response[0] == 250
            assert "merlinux" in response[1]
    >       assert 0  # for demo purposes
    E       assert 0

    test_module.py:6: AssertionError
    __________________________ test_noop[merlinux.eu] __________________________

    smtp = <smtplib.SMTP instance at 0x1eae680>

        def test_noop(smtp):
            response = smtp.noop()
            assert response[0] == 250
    >       assert 0  # for demo purposes
    E       assert 0

    test_module.py:11: AssertionError
    ________________________ test_ehlo[mail.python.org] ________________________

    smtp = <smtplib.SMTP instance at 0x1eb7fc8>

        def test_ehlo(smtp):
            response = smtp.ehlo()
            assert response[0] == 250
    >       assert "merlinux" in response[1]
    E       assert 'merlinux' in 'mail.python.org\nSIZE 10240000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'

    test_module.py:5: AssertionError
    ________________________ test_noop[mail.python.org] ________________________

    smtp = <smtplib.SMTP instance at 0x1eb7fc8>

        def test_noop(smtp):
            response = smtp.noop()
            assert response[0] == 250
    >       assert 0  # for demo purposes
    E       assert 0

    test_module.py:11: AssertionError

We see that our two test functions each ran twice, against the different
``smtp`` instances. Note also that with the ``mail.python.org``
connection the second run fails in ``test_ehlo`` because it expects a
different server string than the one that arrived.

.. _`interdependent fixtures`:

Modularity: using fixtures from a fixture function
----------------------------------------------------------

You can not only use fixtures in test functions but fixture functions
can use other fixtures themselves. This contributes to a modular design
of your fixtures and allows re-use of framework-specific fixtures across
many projects. As a simple example, we can extend the previous example
and instantiate an ``app`` object into which we stick the already defined
``smtp`` resource::

    # content of test_appsetup.py

    import pytest

    class App:
        def __init__(self, smtp):
            self.smtp = smtp

    @pytest.fixture(scope="module")
    def app(smtp):
        return App(smtp)

    def test_smtp_exists(app):
        assert app.smtp

Here we declare an ``app`` fixture which receives the previously defined
``smtp`` fixture and instantiates an ``App`` object with it. Let's run it::

    $ py.test -v test_appsetup.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
    collecting ... collected 2 items

    test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED
    test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED

    ========================= 2 passed in 0.17 seconds =========================

Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
need for the ``app`` fixture to be aware of the ``smtp`` parametrization,
as pytest will fully analyse the fixture dependency graph. Note also
that the ``app`` fixture has a scope of ``module`` but may use a
session-scoped ``smtp`` from the example higher up: it is fine for
fixtures to use "broader" scoped fixtures but not the other way round:
a session-scoped fixture could not use a module-scoped one in a
meaningful way.
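
As a hedged illustration of that rule (the fixture names here are invented,
and the exact error message depends on the pytest version), a broader-scoped
fixture that requests a narrower-scoped one is expected to be rejected with a
scope mismatch error::

    # hypothetical example of the disallowed direction
    import pytest

    @pytest.fixture(scope="module")
    def modresource():
        return object()

    @pytest.fixture(scope="session")
    def sessionresource(modresource):
        # requesting a module-scoped fixture from a session-scoped one
        # is expected to trigger a scope mismatch error at setup time
        return modresource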

.. _`automatic per-resource grouping`:

Automatic grouping of tests by fixture instances
----------------------------------------------------------

.. regendoc: wipe

pytest minimizes the number of active fixtures during test runs.
If you have a parametrized fixture, then all the tests using it will
first execute with one instance and then finalizers are called
before the next fixture instance is created. Among other things,
this eases testing of applications which create and use global state.

The following example uses two parametrized funcargs, one of which is
scoped on a per-module basis, and all the functions perform ``print`` calls
to show the setup/teardown flow::

    # content of test_module.py
    import pytest

    @pytest.fixture(scope="module", params=["mod1", "mod2"])
    def modarg(request):
        param = request.param
        print "create", param
        def fin():
            print "fin", param
        request.addfinalizer(fin)
        return param

    @pytest.fixture(scope="function", params=[1,2])
    def otherarg(request):
        return request.param

    def test_0(otherarg):
        print "  test0", otherarg
    def test_1(modarg):
        print "  test1", modarg
    def test_2(otherarg, modarg):
        print "  test2", otherarg, modarg

Let's run the tests in verbose mode while looking at the print output::

    $ py.test -v -s test_module.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
    collecting ... collected 8 items

    test_module.py:16: test_0[1] PASSED
    test_module.py:16: test_0[2] PASSED
    test_module.py:18: test_1[mod1] PASSED
    test_module.py:20: test_2[1-mod1] PASSED
    test_module.py:20: test_2[2-mod1] PASSED
    test_module.py:18: test_1[mod2] PASSED
    test_module.py:20: test_2[1-mod2] PASSED
    test_module.py:20: test_2[2-mod2] PASSED

    ========================= 8 passed in 0.01 seconds =========================
    test0 1
    test0 2
    create mod1
    test1 mod1
    test2 1 mod1
    test2 2 mod1
    fin mod1
    create mod2
    test1 mod2
    test2 1 mod2
    test2 2 mod2
    fin mod2

You can see that the parametrized module-scoped ``modarg`` resource caused
an ordering of test execution that led to the fewest possible "active"
resources. The finalizer for the ``mod1`` parametrized resource was executed
before the ``mod2`` resource was set up.

.. _`usefixtures`:

using fixtures from classes, modules or projects
----------------------------------------------------------------------

.. regendoc:wipe

Sometimes test functions do not directly need access to a fixture object.
For example, tests may need to operate with an empty directory as the
current working directory but otherwise do not care about the concrete
directory. Here is how you can use the standard `tempfile
<http://docs.python.org/library/tempfile.html>`_ and pytest fixtures to
achieve it. We separate the creation of the fixture into a conftest.py
file::

    # content of conftest.py

    import pytest
    import tempfile
    import os

    @pytest.fixture()
    def cleandir():
        newpath = tempfile.mkdtemp()
        os.chdir(newpath)

and declare its use in a test module via a ``usefixtures`` marker::

    # content of test_setenv.py
    import os
    import pytest

    @pytest.mark.usefixtures("cleandir")
    class TestDirectoryInit:
        def test_cwd_starts_empty(self):
            assert os.listdir(os.getcwd()) == []
            with open("myfile", "w") as f:
                f.write("hello")

        def test_cwd_again_starts_empty(self):
            assert os.listdir(os.getcwd()) == []

Due to the ``usefixtures`` marker, the ``cleandir`` fixture
will be required for the execution of each test method, just as if
you specified a "cleandir" function argument to each of them. Let's run it
to verify our fixture is activated and the tests pass::

    $ py.test -q
    ..

You can specify multiple fixtures like this::

    @pytest.mark.usefixtures("cleandir", "anotherfixture")

and you may specify fixture usage at the test module level, using
a generic feature of the mark mechanism::

    pytestmark = pytest.mark.usefixtures("cleandir")

Lastly you can put fixtures required by all tests in your project
into an ini-file::

    # content of pytest.ini

    [pytest]
    usefixtures = cleandir


.. _`autouse fixtures`:

autouse fixtures (xUnit setup on steroids)
----------------------------------------------------------------------

.. regendoc:wipe

Occasionally, you may want to have fixtures get invoked automatically
without a `usefixtures`_ or `funcargs`_ reference. As a practical
example, suppose we have a database fixture which has a
begin/rollback/commit architecture and we want to automatically surround
each test method by a transaction and a rollback. Here is a dummy
self-contained implementation of this idea::

    # content of test_db_transact.py

    import pytest

    class DB:
        def __init__(self):
            self.intransaction = []
        def begin(self, name):
            self.intransaction.append(name)
        def rollback(self):
            self.intransaction.pop()

    @pytest.fixture(scope="module")
    def db():
        return DB()

    class TestClass:
        @pytest.fixture(autouse=True)
        def transact(self, request, db):
            db.begin(request.function.__name__)
            request.addfinalizer(db.rollback)

        def test_method1(self, db):
            assert db.intransaction == ["test_method1"]

        def test_method2(self, db):
            assert db.intransaction == ["test_method2"]

The class-level ``transact`` fixture is marked with ``autouse=True``,
which implies that all test methods in the class will use this fixture
without a need to state it in the test function signature or with a
class-level ``usefixtures`` decorator.

If we run it, we get two passing tests::

    $ py.test -q
    ..

Here is how autouse fixtures work in other scopes:

- if an autouse fixture is defined in a test module, all its test
  functions automatically use it (a minimal sketch of this case follows
  the list below).

- if an autouse fixture is defined in a conftest.py file then all tests in
  all test modules below its directory will invoke the fixture.

- lastly, and **please use that with care**: if you define an autouse
  fixture in a plugin, it will be invoked for all tests in all projects
  where the plugin is installed. This can be useful if a fixture only
  works in the presence of certain settings anyway, e.g. in the ini-file. Such
  a global fixture should always quickly determine if it should do
  any work and avoid expensive imports or computation otherwise.
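
A minimal sketch of the module-level case from the list above (the fixture
name and its body are made up for illustration)::

    # content of test_module_autouse.py -- hypothetical example
    import pytest

    @pytest.fixture(autouse=True)
    def prepare():
        # runs before every test in this module, without being named
        # in any test signature or usefixtures marker
        print ("preparing module-local state")

    def test_one():
        pass  # prepare() has already run for this test

    def test_two():
        pass  # and again for this one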

Note that the above ``transact`` fixture may very well be a fixture that
you want to make available in your project without having it generally
active. The canonical way to do that is to put the transact definition
into a conftest.py file **without** using ``autouse``::

    # content of conftest.py
    import pytest

    @pytest.fixture()
    def transact(request, db):
        # at module level the fixture function takes no "self";
        # DB.begin() as defined above expects a transaction name
        db.begin(request.function.__name__)
        request.addfinalizer(db.rollback)

and then e.g. have a TestClass using it by declaring the need::

    @pytest.mark.usefixtures("transact")
    class TestClass:
        def test_method1(self):
            ...

All test methods in this TestClass will use the transaction fixture while
other test classes or functions in the module will not use it unless
they also add a ``transact`` reference.

Shifting (visibility of) fixture functions
----------------------------------------------------

If, while implementing your tests, you realize that you
want to use a fixture function from multiple test files, you can move it
to a :ref:`conftest.py <conftest.py>` file or even to separately installable
:ref:`plugins <plugins>` without changing test code. The discovery of
fixture functions starts at test classes, then test modules, then
``conftest.py`` files and finally builtin and third party plugins.

Some files were not shown because too many files have changed in this diff.