Compare commits
395 Commits
Commits 6ad16936bb through 81f4a07548.
@@ -15,7 +15,7 @@ syntax:glob
*.orig
*~

doc/_build
doc/*/_build
build/
dist/
*.egg-info

@@ -23,3 +23,5 @@ issue/
env/
3rdparty/
.tox
.cache
.coverage
.hgtags

@@ -41,3 +41,13 @@ c777dcad166548b7499564cb49ae5c8b4b07f935 2.0.3
49f11dbff725acdcc5fe3657cbcdf9ae04e25bbc 2.0.3
49f11dbff725acdcc5fe3657cbcdf9ae04e25bbc 2.0.3
363e5a5a59c803e6bc176a6f9cc4bf1a1ca2dab0 2.0.3
e5e1746a197f0398356a43fbe2eebac9690f795d 2.1.0
5864412c6f3c903384243bd315639d101d7ebc67 2.1.2
12a05d59249f80276e25fd8b96e8e545b1332b7a 2.1.3
1522710369337d96bf9568569d5f0ca9b38a74e0 2.2.0
3da8cec6c5326ed27c144c9b6d7a64a648370005 2.2.1
92b916483c1e65a80dc80e3f7816b39e84b36a4d 2.2.2
3c11c5c9776f3c678719161e96cc0a08169c1cb8 2.2.3
ad9fe504a371ad8eb613052d58f229aa66f53527 2.2.4
c27a60097767c16a54ae56d9669a77925b213b9b 2.3.0
acf0e1477fb19a1d35a4e40242b77fa6af32eb17 2.3.1
AUTHORS

@@ -22,3 +22,4 @@ Jan Balster
Grig Gheorghiu
Bob Ippolito
Christian Tismer
Daniel Nuri
CHANGELOG
@@ -1,3 +1,254 @@
Changes between 2.3.1 and 2.3.2
-----------------------------------

- fix issue208 and fix issue29 use new py version to avoid long pauses
  when printing tracebacks in long modules

- fix issue205 - conftests in subdirs customizing
  pytest_pycollect_makemodule and pytest_pycollect_makeitem
  now work properly

- fix teardown-ordering for parametrized setups

- fix issue127 - better documentation for pytest_addoption
  and related objects.

- fix unittest behaviour: TestCase.runtest only called if there are
  test methods defined

- improve trial support: don't collect its empty
  unittest.TestCase.runTest() method

- "python setup.py test" now works with pytest itself

- fix/improve internal/packaging related bits:

  - exception message check of test_nose.py now passes on python33 as well

  - issue206 - fix test_assertrewrite.py to work when a global
    PYTHONDONTWRITEBYTECODE=1 is present

  - add tox.ini to pytest distribution so that ignore-dirs and others config
    bits are properly distributed for maintainers who run pytest-own tests

Changes between 2.3.0 and 2.3.1
-----------------------------------

- fix issue202 - fix regression: using "self" from fixture functions now
  works as expected (it's the same "self" instance that a test method
  which uses the fixture sees)

- skip pexpect using tests (test_pdb.py mostly) on freebsd* systems
  due to pexpect not supporting it properly (hanging)

- link to web pages from --markers output which provides help for
  pytest.mark.* usage.

Changes between 2.2.4 and 2.3.0
-----------------------------------

- fix issue202 - better automatic names for parametrized test functions
- fix issue139 - introduce @pytest.fixture which allows direct scoping
  and parametrization of funcarg factories.
- fix issue198 - conftest fixtures were not found on windows32 in some
  circumstances with nested directory structures due to path manipulation issues
- fix issue193 skip test functions with were parametrized with empty
  parameter sets
- fix python3.3 compat, mostly reporting bits that previously depended
  on dict ordering
- introduce re-ordering of tests by resource and parametrization setup
  which takes precedence to the usual file-ordering
- fix issue185 monkeypatching time.time does not cause pytest to fail
- fix issue172 duplicate call of pytest.fixture decoratored setup_module
  functions
- fix junitxml=path construction so that if tests change the
  current working directory and the path is a relative path
  it is constructed correctly from the original current working dir.
- fix "python setup.py test" example to cause a proper "errno" return
- fix issue165 - fix broken doc links and mention stackoverflow for FAQ
- catch unicode-issues when writing failure representations
  to terminal to prevent the whole session from crashing
- fix xfail/skip confusion: a skip-mark or an imperative pytest.skip
  will now take precedence before xfail-markers because we
  can't determine xfail/xpass status in case of a skip. see also:
  http://stackoverflow.com/questions/11105828/in-py-test-when-i-explicitly-skip-a-test-that-is-marked-as-xfail-how-can-i-get

- always report installed 3rd party plugins in the header of a test run

- fix issue160: a failing setup of an xfail-marked tests should
  be reported as xfail (not xpass)

- fix issue128: show captured output when capsys/capfd are used

- fix issue179: propperly show the dependency chain of factories

- pluginmanager.register(...) now raises ValueError if the
  plugin has been already registered or the name is taken

- fix issue159: improve http://pytest.org/latest/faq.html
  especially with respect to the "magic" history, also mention
  pytest-django, trial and unittest integration.

- make request.keywords and node.keywords writable. All descendant
  collection nodes will see keyword values. Keywords are dictionaries
  containing markers and other info.

- fix issue 178: xml binary escapes are now wrapped in py.xml.raw

- fix issue 176: correctly catch the builtin AssertionError
  even when we replaced AssertionError with a subclass on the
  python level

- factory discovery no longer fails with magic global callables
  that provide no sane __code__ object (mock.call for example)

- fix issue 182: testdir.inprocess_run now considers passed plugins

- fix issue 188: ensure sys.exc_info is clear on python2
  before calling into a test

- fix issue 191: add unittest TestCase runTest method support
- fix issue 156: monkeypatch correctly handles class level descriptors

- reporting refinements:

  - pytest_report_header now receives a "startdir" so that
    you can use startdir.bestrelpath(yourpath) to show
    nice relative path

  - allow plugins to implement both pytest_report_header and
    pytest_sessionstart (sessionstart is invoked first).

  - don't show deselected reason line if there is none

  - py.test -vv will show all of assert comparisations instead of truncating

Changes between 2.2.3 and 2.2.4
-----------------------------------

- fix error message for rewritten assertions involving the % operator
- fix issue 126: correctly match all invalid xml characters for junitxml
  binary escape
- fix issue with unittest: now @unittest.expectedFailure markers should
  be processed correctly (you can also use @pytest.mark markers)
- document integration with the extended distribute/setuptools test commands
- fix issue 140: propperly get the real functions
  of bound classmethods for setup/teardown_class
- fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
- fix issue #143: call unconfigure/sessionfinish always when
  configure/sessionstart where called
- fix issue #144: better mangle test ids to junitxml classnames
- upgrade distribute_setup.py to 0.6.27

Changes between 2.2.2 and 2.2.3
----------------------------------------

- fix uploaded package to only include neccesary files

Changes between 2.2.1 and 2.2.2
----------------------------------------

- fix issue101: wrong args to unittest.TestCase test function now
  produce better output
- fix issue102: report more useful errors and hints for when a
  test directory was renamed and some pyc/__pycache__ remain
- fix issue106: allow parametrize to be applied multiple times
  e.g. from module, class and at function level.
- fix issue107: actually perform session scope finalization
- don't check in parametrize if indirect parameters are funcarg names
- add chdir method to monkeypatch funcarg
- fix crash resulting from calling monkeypatch undo a second time
- fix issue115: make --collectonly robust against early failure
  (missing files/directories)
- "-qq --collectonly" now shows only files and the number of tests in them
- "-q --collectonly" now shows test ids
- allow adding of attributes to test reports such that it also works
  with distributed testing (no upgrade of pytest-xdist needed)

Changes between 2.2.0 and 2.2.1
----------------------------------------

- fix issue99 (in pytest and py) internallerrors with resultlog now
  produce better output - fixed by normalizing pytest_internalerror
  input arguments.
- fix issue97 / traceback issues (in pytest and py) improve traceback output
  in conjunction with jinja2 and cython which hack tracebacks
- fix issue93 (in pytest and pytest-xdist) avoid "delayed teardowns":
  the final test in a test node will now run its teardown directly
  instead of waiting for the end of the session. Thanks Dave Hunt for
  the good reporting and feedback. The pytest_runtest_protocol as well
  as the pytest_runtest_teardown hooks now have "nextitem" available
  which will be None indicating the end of the test run.
- fix collection crash due to unknown-source collected items, thanks
  to Ralf Schmitt (fixed by depending on a more recent pylib)

Changes between 2.1.3 and 2.2.0
----------------------------------------

- fix issue90: introduce eager tearing down of test items so that
  teardown function are called earlier.
- add an all-powerful metafunc.parametrize function which allows to
  parametrize test function arguments in multiple steps and therefore
  from indepdenent plugins and palces.
- add a @pytest.mark.parametrize helper which allows to easily
  call a test function with different argument values
- Add examples to the "parametrize" example page, including a quick port
  of Test scenarios and the new parametrize function and decorator.
- introduce registration for "pytest.mark.*" helpers via ini-files
  or through plugin hooks. Also introduce a "--strict" option which
  will treat unregistered markers as errors
  allowing to avoid typos and maintain a well described set of markers
  for your test suite. See exaples at http://pytest.org/latest/mark.html
  and its links.
- issue50: introduce "-m marker" option to select tests based on markers
  (this is a stricter and more predictable version of '-k' in that "-m"
  only matches complete markers and has more obvious rules for and/or
  semantics.
- new feature to help optimizing the speed of your tests:
  --durations=N option for displaying N slowest test calls
  and setup/teardown methods.
- fix issue87: --pastebin now works with python3
- fix issue89: --pdb with unexpected exceptions in doctest work more sensibly
- fix and cleanup pytest's own test suite to not leak FDs
- fix issue83: link to generated funcarg list
- fix issue74: pyarg module names are now checked against imp.find_module false positives
- fix compatibility with twisted/trial-11.1.0 use cases
- simplify Node.listchain
- simplify junitxml output code by relying on py.xml
- add support for skip properties on unittest classes and functions

Changes between 2.1.2 and 2.1.3
----------------------------------------

- fix issue79: assertion rewriting failed on some comparisons in boolops
- correctly handle zero length arguments (a la pytest '')
- fix issue67 / junitxml now contains correct test durations, thanks ronny
- fix issue75 / skipping test failure on jython
- fix issue77 / Allow assertrepr_compare hook to apply to a subset of tests

Changes between 2.1.1 and 2.1.2
----------------------------------------

- fix assertion rewriting on files with windows newlines on some Python versions
- refine test discovery by package/module name (--pyargs), thanks Florian Mayer
- fix issue69 / assertion rewriting fixed on some boolean operations
- fix issue68 / packages now work with assertion rewriting
- fix issue66: use different assertion rewriting caches when the -O option is passed
- don't try assertion rewriting on Jython, use reinterp

Changes between 2.1.0 and 2.1.1
----------------------------------------------

- fix issue64 / pytest.set_trace now works within pytest_generate_tests hooks
- fix issue60 / fix error conditions involving the creation of __pycache__
- fix issue63 / assertion rewriting on inserts involving strings containing '%'
- fix assertion rewriting on calls with a ** arg
- don't cache rewritten modules if bytecode generation is disabled
- fix assertion rewriting in read-only directories
- fix issue59: provide system-out/err tags for junitxml output
- fix issue61: assertion rewriting on boolean operations with 3 or more operands
- you can now build a man page with "cd doc ; make man"

Changes between 2.0.3 and 2.1.0.DEV
----------------------------------------------
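For orientation, the @pytest.mark.parametrize helper mentioned in the 2.2.0 notes above is used roughly like this (a minimal sketch; the test function and argument values are purely illustrative)::

    import pytest

    # one test invocation per (input, expected) pair
    @pytest.mark.parametrize("input, expected", [
        ("3+5", 8),
        ("2*4", 8),
    ])
    def test_eval(input, expected):
        assert eval(input) == expected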
ISSUES.txt
@@ -1,3 +1,125 @@
|
||||
|
||||
improve / add to dependency/test resource injection
|
||||
-------------------------------------------------------------
|
||||
tags: wish feature docs
|
||||
|
||||
write up better examples showing the connection between
|
||||
the two.
|
||||
|
||||
refine parametrize API
|
||||
-------------------------------------------------------------
|
||||
tags: critical feature
|
||||
|
||||
extend metafunc.parametrize to directly support indirection, example:
|
||||
|
||||
def setupdb(request, config):
|
||||
# setup "resource" based on test request and the values passed
|
||||
# in to parametrize. setupfunc is called for each such value.
|
||||
# you may use request.addfinalizer() or request.cached_setup ...
|
||||
return dynamic_setup_database(val)
|
||||
|
||||
@pytest.mark.parametrize("db", ["pg", "mysql"], setupfunc=setupdb)
|
||||
def test_heavy_functional_test(db):
|
||||
...
|
||||
|
||||
There would be no need to write or explain funcarg factories and
|
||||
their special __ syntax.
|
||||
|
||||
The examples and improvements should also show how to put the parametrize
|
||||
decorator to a class, to a module or even to a directory. For the directory
|
||||
part a conftest.py content like this::
|
||||
|
||||
pytestmark = [
|
||||
@pytest.mark.parametrize_setup("db", ...),
|
||||
]
|
||||
|
||||
probably makes sense in order to keep the declarative nature. This mirrors
|
||||
the marker-mechanism with respect to a test module but puts it to a directory
|
||||
scale.
|
||||
|
||||
When doing larger scoped parametrization it probably becomes neccessary
|
||||
to allow parametrization to be ignored if the according parameter is not
|
||||
used (currently any parametrized argument that is not present in a function will cause a ValueError). Example:
|
||||
|
||||
@pytest.mark.parametrize("db", ..., mustmatch=False)
|
||||
|
||||
means to not raise an error but simply ignore the parametrization
|
||||
if the signature of a decorated function does not match. XXX is it
|
||||
not sufficient to always allow non-matches?
|
||||
|
||||
|
||||
unify item/request classes, generalize items
|
||||
---------------------------------------------------------------
|
||||
tags: 2.4 wish
|
||||
|
||||
in lieu of extended parametrization and the new way to specify resource
|
||||
factories in terms of the parametrize decorator, consider unification
|
||||
of the item and request class. This also is connected with allowing
|
||||
funcargs in setup functions. Example of new item API:
|
||||
|
||||
item.getresource("db") # alias for request.getfuncargvalue
|
||||
item.addfinalizer(...)
|
||||
item.cached_setup(...)
|
||||
item.applymarker(...)
|
||||
|
||||
test classes/modules could then use this api via::
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
use item API ...
|
||||
|
||||
introduction of this new method needs to be _fully_ backward compatible -
|
||||
and the documentation needs to change along to mention this new way of
|
||||
doing things.
|
||||
|
||||
impl note: probably Request._fillfixtures would be called from the
|
||||
python plugins own pytest_runtest_setup(item) and would call
|
||||
item.getresource(X) for all X in the funcargs of a function.
|
||||
|
||||
XXX is it possible to even put the above item API to Nodes, i.e. also
|
||||
to Directorty/module/file/class collectors? Problem is that current
|
||||
funcarg factories presume they are called with a per-function (even
|
||||
per-funcarg-per-function) scope. Could there be small tweaks to the new
|
||||
API that lift this restriction?
|
||||
|
||||
consider::
|
||||
|
||||
def setup_class(cls, tmpdir):
|
||||
# would get a per-class tmpdir because tmpdir parametrization
|
||||
# would know that it is called with a class scope
|
||||
#
|
||||
#
|
||||
#
|
||||
this looks very difficult because those setup functions are also used
|
||||
by nose etc. Rather consider introduction of a new setup hook:
|
||||
|
||||
def setup_test(self, item):
|
||||
self.db = item.cached_setup(..., scope='class')
|
||||
self.tmpdir = item.getresource("tmpdir")
|
||||
|
||||
this should be compatible to unittest/nose and provide much of what
|
||||
"testresources" provide. XXX This would not allow full parametrization
|
||||
such that test function could be run multiple times with different
|
||||
values. See "parametrized attributes" issue.
|
||||
|
||||
allow parametrized attributes on classes
|
||||
--------------------------------------------------
|
||||
|
||||
tags: wish 2.4
|
||||
|
||||
example:
|
||||
|
||||
@pytest.mark.parametrize_attr("db", setupfunc, [1,2,3], scope="class")
|
||||
@pytest.mark.parametrize_attr("tmp", setupfunc, scope="...")
|
||||
class TestMe:
|
||||
def test_hello(self):
|
||||
access self.db ...
|
||||
|
||||
this would run the test_hello() function three times with three
|
||||
different values for self.db. This could also work with unittest/nose
|
||||
style tests, i.e. it leverages existing test suites without needing
|
||||
to rewrite them. Together with the previously mentioned setup_test()
|
||||
maybe the setupfunc could be ommitted?
|
||||
|
||||
checks / deprecations for next release
|
||||
---------------------------------------------------------------
|
||||
tags: bug 2.4 core xdist
|
||||
@@ -7,6 +129,13 @@ tags: bug 2.4 core xdist
|
||||
the protocol now - setup/teardown is called at module level.
|
||||
consider making calling of setup/teardown configurable
|
||||
|
||||
optimizations
|
||||
---------------------------------------------------------------
|
||||
tags: 2.4 core
|
||||
|
||||
- look at ihook optimization such that all lookups for
|
||||
hooks relating to the same fspath are cached.
|
||||
|
||||
fix start/finish partial finailization problem
|
||||
---------------------------------------------------------------
|
||||
tags: bug core
|
||||
@@ -18,21 +147,9 @@ appropriately to avoid this issue. Moreover/Alternatively, we could
|
||||
record which implementations of a hook succeeded and only call their
|
||||
teardown.
|
||||
|
||||
do early-teardown of test modules
|
||||
-----------------------------------------
|
||||
tags: feature 2.1
|
||||
|
||||
currently teardowns are called when the next tests is setup
|
||||
except for the function/method level where interally
|
||||
"teardown_exact" tears down immediately. Generalize
|
||||
this to perform the "neccessary" teardown compared to
|
||||
the "next" test item during teardown - this should
|
||||
get rid of some irritations because otherwise e.g.
|
||||
prints of teardown-code appear in the setup of the next test.
|
||||
|
||||
consider and document __init__ file usage in test directories
|
||||
---------------------------------------------------------------
|
||||
tags: bug 2.1 core
|
||||
tags: bug core
|
||||
|
||||
Currently, a test module is imported with its fully qualified
|
||||
package path, determined by checking __init__ files upwards.
|
||||
@@ -47,7 +164,7 @@ certain scenarios makes sense.
|
||||
|
||||
relax requirement to have tests/testing contain an __init__
|
||||
----------------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
bb: http://bitbucket.org/hpk42/py-trunk/issue/64
|
||||
|
||||
A local test run of a "tests" directory may work
|
||||
@@ -58,25 +175,24 @@ i.e. port the nose-logic of unloading a test module.
|
||||
|
||||
customize test function collection
|
||||
-------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
- introduce py.test.mark.nocollect for not considering a function for
|
||||
test collection at all. maybe also introduce a py.test.mark.test to
|
||||
explicitely mark a function to become a tested one. Lookup JUnit ways
|
||||
of tagging tests.
|
||||
|
||||
- allow an easy way to customize "test_", "Test" prefixes for file paths
|
||||
and test function/class names. the current customizable Item requires
|
||||
too much code/concepts to influence this collection matching.
|
||||
maybe introduce pytest_pycollect_filters = {
|
||||
'file': 'test*.py',
|
||||
'function': 'test*',
|
||||
'class': 'Test*',
|
||||
}
|
||||
introduce pytest.mark.importorskip
|
||||
-------------------------------------------------------
|
||||
tags: feature
|
||||
|
||||
in addition to the imperative pytest.importorskip also introduce
|
||||
a pytest.mark.importorskip so that the test count is more correct.
|
||||
|
||||
|
||||
introduce py.test.mark.platform
|
||||
-------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
Introduce nice-to-spell platform-skipping, examples:
|
||||
|
||||
@@ -93,44 +209,36 @@ interpreter versions.
|
||||
|
||||
pytest.mark.xfail signature change
|
||||
-------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
change to pytest.mark.xfail(reason, (optional)condition)
|
||||
to better implement the word meaning. It also signals
|
||||
better that we always have some kind of an implementation
|
||||
reason that can be formualated.
|
||||
Compatibility? Maybe rename to "pytest.mark.xfail"?
|
||||
|
||||
introduce py.test.mark registration
|
||||
-----------------------------------------
|
||||
tags: feature 2.1
|
||||
|
||||
introduce a hook that allows to register a named mark decorator
|
||||
with documentation and add "py.test --marks" to get
|
||||
a list of available marks. Deprecate "dynamic" mark
|
||||
definitions.
|
||||
Compatibility? how to introduce a new name/keep compat?
|
||||
|
||||
allow to non-intrusively apply skipfs/xfail/marks
|
||||
---------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
use case: mark a module or directory structures
|
||||
to be skipped on certain platforms (i.e. no import
|
||||
attempt will be made).
|
||||
|
||||
consider introducing a hook/mechanism that allows to apply marks
|
||||
from conftests or plugins.
|
||||
from conftests or plugins. (See extended parametrization)
|
||||
|
||||
|
||||
explicit referencing of conftest.py files
|
||||
-----------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
allow to name conftest.py files (in sub directories) that should
|
||||
be imported early, as to include command line options.
|
||||
|
||||
improve central py.test ini file
|
||||
----------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
introduce more declarative configuration options:
|
||||
- (to-be-collected test directories)
|
||||
@@ -141,49 +249,24 @@ introduce more declarative configuration options:
|
||||
|
||||
new documentation
|
||||
----------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
- logo py.test
|
||||
- examples for unittest or functional testing
|
||||
- resource management for functional testing
|
||||
- patterns: page object
|
||||
- parametrized testing
|
||||
- better / more integrated plugin docs
|
||||
|
||||
generalize parametrized testing to generate combinations
|
||||
-------------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
|
||||
think about extending metafunc.addcall or add a new method to allow to
|
||||
generate tests with combinations of all generated versions - what to do
|
||||
about "id" and "param" in such combinations though?
|
||||
|
||||
introduce py.test.mark.multi
|
||||
-----------------------------------------
|
||||
tags: feature 1.3
|
||||
|
||||
introduce py.test.mark.multi to specify a number
|
||||
of values for a given function argument.
|
||||
|
||||
have imported module mismatch honour relative paths
|
||||
--------------------------------------------------------
|
||||
tags: bug 2.1
|
||||
tags: bug
|
||||
|
||||
With 1.1.1 py.test fails at least on windows if an import
|
||||
is relative and compared against an absolute conftest.py
|
||||
path. Normalize.
|
||||
|
||||
call termination with small timeout
|
||||
-------------------------------------------------
|
||||
tags: feature 2.1
|
||||
test: testing/pytest/dist/test_dsession.py - test_terminate_on_hanging_node
|
||||
|
||||
Call gateway group termination with a small timeout if available.
|
||||
Should make dist-testing less likely to leave lost processes.
|
||||
|
||||
consider globals: py.test.ensuretemp and config
|
||||
--------------------------------------------------------------
|
||||
tags: experimental-wish 2.1
|
||||
tags: experimental-wish
|
||||
|
||||
consider deprecating py.test.ensuretemp and py.test.config
|
||||
to further reduce py.test globality. Also consider
|
||||
@@ -192,7 +275,7 @@ a plugin rather than being there from the start.
|
||||
|
||||
consider allowing funcargs for setup methods
|
||||
--------------------------------------------------------------
|
||||
tags: experimental-wish 2.1
|
||||
tags: experimental-wish
|
||||
|
||||
Users have expressed the wish to have funcargs available to setup
|
||||
functions. Experiment with allowing funcargs there - it might
|
||||
@@ -202,13 +285,20 @@ factories with a request object that not have a cls/function
|
||||
attributes. However, how to handle parametrized test functions
|
||||
and funcargs?
|
||||
|
||||
setup_function -> request can be like it is now
|
||||
setup_class -> request has no request.function
|
||||
setup_module -> request has no request.cls
|
||||
maybe introduce a setup method like:
|
||||
|
||||
setup_invocation(self, request)
|
||||
|
||||
which has full access to the test invocation through "request"
|
||||
through which you can get funcargvalues, use cached_setup etc.
|
||||
Therefore, the access to funcargs would be indirect but it
|
||||
could be consistently implemented. setup_invocation() would
|
||||
be a "glue" function for bringing together the xUnit and funcargs
|
||||
world.
|
||||
|
||||
consider pytest_addsyspath hook
|
||||
-----------------------------------------
|
||||
tags: 2.1
|
||||
tags:
|
||||
|
||||
py.test could call a new pytest_addsyspath() in order to systematically
|
||||
allow manipulation of sys.path and to inhibit it via --no-addsyspath
|
||||
@@ -220,7 +310,7 @@ and pytest_configure.
|
||||
|
||||
show plugin information in test header
|
||||
----------------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
Now that external plugins are becoming more numerous
|
||||
it would be useful to have external plugins along with
|
||||
@@ -228,7 +318,7 @@ their versions displayed as a header line.
|
||||
|
||||
deprecate global py.test.config usage
|
||||
----------------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
py.test.ensuretemp and py.test.config are probably the last
|
||||
objects containing global state. Often using them is not
|
||||
@@ -238,9 +328,58 @@ as others.
|
||||
|
||||
remove deprecated bits in collect.py
|
||||
-------------------------------------------------------------------
|
||||
tags: feature 2.1
|
||||
tags: feature
|
||||
|
||||
In an effort to further simplify code, review and remove deprecated bits
|
||||
in collect.py. Probably good:
|
||||
- inline consider_file/dir methods, no need to have them
|
||||
subclass-overridable because of hooks
|
||||
|
||||
implement fslayout decorator
|
||||
---------------------------------
|
||||
tags: feature
|
||||
|
||||
Improve the way how tests can work with pre-made examples,
|
||||
keeping the layout close to the test function:
|
||||
|
||||
@pytest.mark.fslayout("""
|
||||
conftest.py:
|
||||
# empty
|
||||
tests/
|
||||
test_%(NAME)s: # becomes test_run1.py
|
||||
def test_function(self):
|
||||
pass
|
||||
""")
|
||||
def test_run(pytester, fslayout):
|
||||
p = fslayout.findone("test_*.py")
|
||||
result = pytester.runpytest(p)
|
||||
assert result.ret == 0
|
||||
assert result.passed == 1
|
||||
|
||||
Another idea is to allow to define a full scenario including the run
|
||||
in one content string::
|
||||
|
||||
runscenario("""
|
||||
test_{TESTNAME}.py:
|
||||
import pytest
|
||||
@pytest.mark.xfail
|
||||
def test_that_fails():
|
||||
assert 0
|
||||
|
||||
@pytest.mark.skipif("True")
|
||||
def test_hello():
|
||||
pass
|
||||
|
||||
conftest.py:
|
||||
import pytest
|
||||
def pytest_runsetup_setup(item):
|
||||
pytest.skip("abc")
|
||||
|
||||
runpytest -rsxX
|
||||
*SKIP*{TESTNAME}*
|
||||
*1 skipped*
|
||||
""")
|
||||
|
||||
This could be run with at least three different ways to invoke pytest:
|
||||
through the shell, through "python -m pytest" and inlined. As inlined
|
||||
would be the fastest it could be run first (or "--fast" mode).
|
||||
|
||||
@@ -2,6 +2,7 @@ include CHANGELOG
include README.txt
include setup.py
include distribute_setup.py
include tox.ini
include LICENSE
graft doc
graft testing
@@ -1,2 +1,2 @@
#
__version__ = '2.1.0'
__version__ = '2.3.2'
@@ -38,23 +38,19 @@ def pytest_configure(config):
            import ast
        except ImportError:
            mode = "reinterp"
        else:
            if sys.platform.startswith('java'):
                mode = "reinterp"
    if mode != "plain":
        _load_modules(mode)
        def callbinrepr(op, left, right):
            hook_result = config.hook.pytest_assertrepr_compare(
                config=config, op=op, left=left, right=right)
            for new_expl in hook_result:
                if new_expl:
                    return '\n~'.join(new_expl)
        m = monkeypatch()
        config._cleanup.append(m.undo)
        m.setattr(py.builtin.builtins, 'AssertionError',
                  reinterpret.AssertionError)
        m.setattr(util, '_reprcompare', callbinrepr)
    hook = None
    if mode == "rewrite":
        hook = rewrite.AssertionRewritingHook()
        sys.meta_path.append(hook)
        sys.meta_path.insert(0, hook)
    warn_about_missing_assertion(mode)
    config._assertstate = AssertionState(config, mode)
    config._assertstate.hook = hook

@@ -73,6 +69,28 @@ def pytest_collection(session):
    if hook is not None:
        hook.set_session(session)

def pytest_runtest_setup(item):
    def callbinrepr(op, left, right):
        hook_result = item.ihook.pytest_assertrepr_compare(
            config=item.config, op=op, left=left, right=right)

        for new_expl in hook_result:
            if new_expl:
                # Don't include pageloads of data unless we are very verbose (-vv)
                if len(''.join(new_expl[1:])) > 80*8 and item.config.option.verbose < 2:
                    new_expl[1:] = ['Detailed information too verbose, truncated']
                res = '\n~'.join(new_expl)
                if item.config.getvalue("assertmode") == "rewrite":
                    # The result will be fed back a python % formatting
                    # operation, which will fail if there are extraneous
                    # '%'s in the string. Escape them here.
                    res = res.replace("%", "%%")
                return res
    util._reprcompare = callbinrepr

def pytest_runtest_teardown(item):
    util._reprcompare = None

def pytest_sessionfinish(session):
    hook = session.config._assertstate.hook
    if hook is not None:
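The callbinrepr() helpers above feed the pytest_assertrepr_compare hook, which plugins and conftest.py files can implement to customize failure explanations. A minimal sketch (the Point class and messages are illustrative, not part of this changeset)::

    # conftest.py
    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

    def pytest_assertrepr_compare(config, op, left, right):
        # return a list of lines to replace the default explanation
        if isinstance(left, Point) and isinstance(right, Point) and op == "==":
            return ["comparing Point instances:",
                    "   x: %r != %r" % (left.x, right.x),
                    "   y: %r != %r" % (left.y, right.y)]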
@@ -53,7 +53,7 @@ def interpret(source, frame, should_fail=False):
    if should_fail:
        return ("(assertion failed, but when it was re-run for "
                "printing intermediate values, it did not fail. Suggestions: "
                "compute assert expression before the assert or use --no-assert)")
                "compute assert expression before the assert or use --assert=plain)")

def run(offending_line, frame=None):
    if frame is None:

@@ -1,8 +1,7 @@
import py
import sys, inspect
from compiler import parse, ast, pycodegen
from _pytest.assertion.util import format_explanation
from _pytest.assertion.reinterpret import BuiltinAssertionError
from _pytest.assertion.util import format_explanation, BuiltinAssertionError

passthroughex = py.builtin._sysex

@@ -482,7 +481,7 @@ def interpret(source, frame, should_fail=False):
    if should_fail:
        return ("(assertion failed, but when it was re-run for "
                "printing intermediate values, it did not fail. Suggestions: "
                "compute assert expression before the assert or use --nomagic)")
                "compute assert expression before the assert or use --assert=plain)")
    else:
        return None

@@ -527,10 +526,13 @@ if __name__ == '__main__':
    # example:
    def f():
        return 5

    def g():
        return 3

    def h(x):
        return 'never'

    check("f() * g() == 5")
    check("not f()")
    check("not (f() and g() or 0)")

@@ -1,7 +1,6 @@
import sys
import py

BuiltinAssertionError = py.builtin.builtins.AssertionError
from _pytest.assertion.util import BuiltinAssertionError

class AssertionError(BuiltinAssertionError):
    def __init__(self, *args):

@@ -45,4 +44,3 @@ if sys.version_info >= (2, 6) or (sys.platform.startswith("java")):
    from _pytest.assertion.newinterpret import interpret as reinterpret
else:
    reinterpret = reinterpret_old
@@ -1,7 +1,7 @@
|
||||
"""Rewrite assertion AST to produce nice error messages"""
|
||||
|
||||
import ast
|
||||
import collections
|
||||
import errno
|
||||
import itertools
|
||||
import imp
|
||||
import marshal
|
||||
@@ -14,6 +14,12 @@ import py
|
||||
from _pytest.assertion import util
|
||||
|
||||
|
||||
# Windows gives ENOENT in places *nix gives ENOTDIR.
|
||||
if sys.platform.startswith("win"):
|
||||
PATH_COMPONENT_NOT_DIR = errno.ENOENT
|
||||
else:
|
||||
PATH_COMPONENT_NOT_DIR = errno.ENOTDIR
|
||||
|
||||
# py.test caches rewritten pycs in __pycache__.
|
||||
if hasattr(imp, "get_tag"):
|
||||
PYTEST_TAG = imp.get_tag() + "-PYTEST"
|
||||
@@ -28,8 +34,13 @@ else:
|
||||
PYTEST_TAG = "%s-%s%s-PYTEST" % (impl, ver[0], ver[1])
|
||||
del ver, impl
|
||||
|
||||
PYC_EXT = ".py" + ("c" if __debug__ else "o")
|
||||
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT
|
||||
|
||||
REWRITE_NEWLINES = sys.version_info[:2] != (2, 7) and sys.version_info < (3, 2)
|
||||
|
||||
class AssertionRewritingHook(object):
|
||||
"""Import hook which rewrites asserts."""
|
||||
"""PEP302 Import hook which rewrites asserts."""
|
||||
|
||||
def __init__(self):
|
||||
self.session = None
|
||||
@@ -44,6 +55,7 @@ class AssertionRewritingHook(object):
|
||||
return None
|
||||
sess = self.session
|
||||
state = sess.config._assertstate
|
||||
state.trace("find_module called for: %s" % name)
|
||||
names = name.rsplit(".", 1)
|
||||
lastname = names[-1]
|
||||
pth = None
|
||||
@@ -66,7 +78,7 @@ class AssertionRewritingHook(object):
|
||||
# Don't know what this is.
|
||||
return None
|
||||
else:
|
||||
fn = os.path.join(pth, name + ".py")
|
||||
fn = os.path.join(pth, name.rpartition(".")[2] + ".py")
|
||||
fn_pypath = py.path.local(fn)
|
||||
# Is this a test file?
|
||||
if not sess.isinitpath(fn):
|
||||
@@ -76,11 +88,15 @@ class AssertionRewritingHook(object):
|
||||
try:
|
||||
for pat in self.fnpats:
|
||||
if fn_pypath.fnmatch(pat):
|
||||
state.trace("matched test file %r" % (fn,))
|
||||
break
|
||||
else:
|
||||
return None
|
||||
finally:
|
||||
self.session = sess
|
||||
else:
|
||||
state.trace("matched test file (was specified on cmdline): %r" %
|
||||
(fn,))
|
||||
# The requested module looks like a test file, so rewrite it. This is
|
||||
# the most magical part of the process: load the source, rewrite the
|
||||
# asserts, and load the rewritten source. We also cache the rewritten
|
||||
@@ -89,17 +105,40 @@ class AssertionRewritingHook(object):
|
||||
# tricky race conditions, we maintain the following invariant: The
|
||||
# cached pyc is always a complete, valid pyc. Operations on it must be
|
||||
# atomic. POSIX's atomic rename comes in handy.
|
||||
write = not sys.dont_write_bytecode
|
||||
cache_dir = os.path.join(fn_pypath.dirname, "__pycache__")
|
||||
py.path.local(cache_dir).ensure(dir=True)
|
||||
cache_name = fn_pypath.basename[:-3] + "." + PYTEST_TAG + ".pyc"
|
||||
if write:
|
||||
try:
|
||||
os.mkdir(cache_dir)
|
||||
except OSError:
|
||||
e = sys.exc_info()[1].errno
|
||||
if e == errno.EEXIST:
|
||||
# Either the __pycache__ directory already exists (the
|
||||
# common case) or it's blocked by a non-dir node. In the
|
||||
# latter case, we'll ignore it in _write_pyc.
|
||||
pass
|
||||
elif e == PATH_COMPONENT_NOT_DIR:
|
||||
# One of the path components was not a directory, likely
|
||||
# because we're in a zip file.
|
||||
write = False
|
||||
elif e == errno.EACCES:
|
||||
state.trace("read only directory: %r" % fn_pypath.dirname)
|
||||
write = False
|
||||
else:
|
||||
raise
|
||||
cache_name = fn_pypath.basename[:-3] + PYC_TAIL
|
||||
pyc = os.path.join(cache_dir, cache_name)
|
||||
# Notice that even if we're in a read-only directory, I'm going
|
||||
# to check for a cached pyc. This may not be optimal...
|
||||
co = _read_pyc(fn_pypath, pyc)
|
||||
if co is None:
|
||||
state.trace("rewriting %r" % (fn,))
|
||||
co = _make_rewritten_pyc(state, fn_pypath, pyc)
|
||||
co = _rewrite_test(state, fn_pypath)
|
||||
if co is None:
|
||||
# Probably a SyntaxError in the module.
|
||||
# Probably a SyntaxError in the test.
|
||||
return None
|
||||
if write:
|
||||
_make_rewritten_pyc(state, fn_pypath, pyc, co)
|
||||
else:
|
||||
state.trace("found cached rewritten pyc for %r" % (fn,))
|
||||
self.modules[name] = co, pyc
|
||||
@@ -122,29 +161,42 @@ class AssertionRewritingHook(object):
|
||||
return sys.modules[name]
|
||||
|
||||
def _write_pyc(co, source_path, pyc):
|
||||
# Technically, we don't have to have the same pyc format as (C)Python, since
|
||||
# these "pycs" should never be seen by builtin import. However, there's
|
||||
# little reason deviate, and I hope sometime to be able to use
|
||||
# imp.load_compiled to load them. (See the comment in load_module above.)
|
||||
# Technically, we don't have to have the same pyc format as
|
||||
# (C)Python, since these "pycs" should never be seen by builtin
|
||||
# import. However, there's little reason deviate, and I hope
|
||||
# sometime to be able to use imp.load_compiled to load them. (See
|
||||
# the comment in load_module above.)
|
||||
mtime = int(source_path.mtime())
|
||||
fp = open(pyc, "wb")
|
||||
try:
|
||||
fp = open(pyc, "wb")
|
||||
except IOError:
|
||||
err = sys.exc_info()[1].errno
|
||||
if err == PATH_COMPONENT_NOT_DIR:
|
||||
# This happens when we get a EEXIST in find_module creating the
|
||||
# __pycache__ directory and __pycache__ is by some non-dir node.
|
||||
return False
|
||||
raise
|
||||
try:
|
||||
fp.write(imp.get_magic())
|
||||
fp.write(struct.pack("<l", mtime))
|
||||
marshal.dump(co, fp)
|
||||
finally:
|
||||
fp.close()
|
||||
return True
|
||||
|
||||
def _make_rewritten_pyc(state, fn, pyc):
|
||||
"""Try to rewrite *fn* and dump the rewritten code to *pyc*.
|
||||
RN = "\r\n".encode("utf-8")
|
||||
N = "\n".encode("utf-8")
|
||||
|
||||
Return the code object of the rewritten module on success. Return None if
|
||||
there are problems parsing or compiling the module.
|
||||
"""
|
||||
def _rewrite_test(state, fn):
|
||||
"""Try to read and rewrite *fn* and return the code object."""
|
||||
try:
|
||||
source = fn.read("rb")
|
||||
except EnvironmentError:
|
||||
return None
|
||||
# On Python versions which are not 2.7 and less than or equal to 3.1, the
|
||||
# parser expects *nix newlines.
|
||||
if REWRITE_NEWLINES:
|
||||
source = source.replace(RN, N) + N
|
||||
try:
|
||||
tree = ast.parse(source)
|
||||
except SyntaxError:
|
||||
@@ -159,6 +211,10 @@ def _make_rewritten_pyc(state, fn, pyc):
|
||||
# assertion rewriting, but I don't know of a fast way to tell.
|
||||
state.trace("failed to compile: %r" % (fn,))
|
||||
return None
|
||||
return co
|
||||
|
||||
def _make_rewritten_pyc(state, fn, pyc, co):
|
||||
"""Try to dump rewritten code to *pyc*."""
|
||||
if sys.platform.startswith("win"):
|
||||
# Windows grants exclusive access to open files and doesn't have atomic
|
||||
# rename, so just write into the final file.
|
||||
@@ -167,9 +223,8 @@ def _make_rewritten_pyc(state, fn, pyc):
|
||||
# When not on windows, assume rename is atomic. Dump the code object
|
||||
# into a file specific to this process and atomically replace it.
|
||||
proc_pyc = pyc + "." + str(os.getpid())
|
||||
_write_pyc(co, fn, proc_pyc)
|
||||
os.rename(proc_pyc, pyc)
|
||||
return co
|
||||
if _write_pyc(co, fn, proc_pyc):
|
||||
os.rename(proc_pyc, pyc)
|
||||
|
||||
def _read_pyc(source, pyc):
|
||||
"""Possibly read a py.test pyc containing rewritten code.
|
||||
@@ -187,9 +242,8 @@ def _read_pyc(source, pyc):
|
||||
except EnvironmentError:
|
||||
return None
|
||||
# Check for invalid or out of date pyc file.
|
||||
if (len(data) != 8 or
|
||||
data[:4] != imp.get_magic() or
|
||||
struct.unpack("<l", data[4:])[0] != mtime):
|
||||
if (len(data) != 8 or data[:4] != imp.get_magic() or
|
||||
struct.unpack("<l", data[4:])[0] != mtime):
|
||||
return None
|
||||
co = marshal.load(fp)
|
||||
if not isinstance(co, types.CodeType):
|
||||
@@ -227,35 +281,35 @@ def _call_reprcompare(ops, results, expls, each_obj):
|
||||
|
||||
|
||||
unary_map = {
|
||||
ast.Not : "not %s",
|
||||
ast.Invert : "~%s",
|
||||
ast.USub : "-%s",
|
||||
ast.UAdd : "+%s"
|
||||
ast.Not: "not %s",
|
||||
ast.Invert: "~%s",
|
||||
ast.USub: "-%s",
|
||||
ast.UAdd: "+%s"
|
||||
}
|
||||
|
||||
binop_map = {
|
||||
ast.BitOr : "|",
|
||||
ast.BitXor : "^",
|
||||
ast.BitAnd : "&",
|
||||
ast.LShift : "<<",
|
||||
ast.RShift : ">>",
|
||||
ast.Add : "+",
|
||||
ast.Sub : "-",
|
||||
ast.Mult : "*",
|
||||
ast.Div : "/",
|
||||
ast.FloorDiv : "//",
|
||||
ast.Mod : "%",
|
||||
ast.Eq : "==",
|
||||
ast.NotEq : "!=",
|
||||
ast.Lt : "<",
|
||||
ast.LtE : "<=",
|
||||
ast.Gt : ">",
|
||||
ast.GtE : ">=",
|
||||
ast.Pow : "**",
|
||||
ast.Is : "is",
|
||||
ast.IsNot : "is not",
|
||||
ast.In : "in",
|
||||
ast.NotIn : "not in"
|
||||
ast.BitOr: "|",
|
||||
ast.BitXor: "^",
|
||||
ast.BitAnd: "&",
|
||||
ast.LShift: "<<",
|
||||
ast.RShift: ">>",
|
||||
ast.Add: "+",
|
||||
ast.Sub: "-",
|
||||
ast.Mult: "*",
|
||||
ast.Div: "/",
|
||||
ast.FloorDiv: "//",
|
||||
ast.Mod: "%%", # escaped for string formatting
|
||||
ast.Eq: "==",
|
||||
ast.NotEq: "!=",
|
||||
ast.Lt: "<",
|
||||
ast.LtE: "<=",
|
||||
ast.Gt: ">",
|
||||
ast.GtE: ">=",
|
||||
ast.Pow: "**",
|
||||
ast.Is: "is",
|
||||
ast.IsNot: "is not",
|
||||
ast.In: "in",
|
||||
ast.NotIn: "not in"
|
||||
}
|
||||
|
||||
|
||||
@@ -288,7 +342,7 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
lineno = 0
|
||||
for item in mod.body:
|
||||
if (expect_docstring and isinstance(item, ast.Expr) and
|
||||
isinstance(item.value, ast.Str)):
|
||||
isinstance(item.value, ast.Str)):
|
||||
doc = item.value.s
|
||||
if "PYTEST_DONT_REWRITE" in doc:
|
||||
# The module has disabled assertion rewriting.
|
||||
@@ -329,7 +383,7 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
"""Get a new variable."""
|
||||
# Use a character invalid in python identifiers to avoid clashing.
|
||||
name = "@py_assert" + str(next(self.variable_counter))
|
||||
self.variables[self.cond_chain].add(name)
|
||||
self.variables.append(name)
|
||||
return name
|
||||
|
||||
def assign(self, expr):
|
||||
@@ -358,13 +412,6 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
self.explanation_specifiers[specifier] = expr
|
||||
return "%(" + specifier + ")s"
|
||||
|
||||
def enter_cond(self, cond, body):
|
||||
self.statements.append(ast.If(cond, body, []))
|
||||
self.cond_chain += cond,
|
||||
|
||||
def leave_cond(self, n=1):
|
||||
self.cond_chain = self.cond_chain[:-n]
|
||||
|
||||
def push_format_context(self):
|
||||
self.explanation_specifiers = {}
|
||||
self.stack.append(self.explanation_specifiers)
|
||||
@@ -392,7 +439,7 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
return [assert_]
|
||||
self.statements = []
|
||||
self.cond_chain = ()
|
||||
self.variables = collections.defaultdict(set)
|
||||
self.variables = []
|
||||
self.variable_counter = itertools.count()
|
||||
self.stack = []
|
||||
self.on_failure = []
|
||||
@@ -414,22 +461,12 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
else:
|
||||
raise_ = ast.Raise(exc, None, None)
|
||||
body.append(raise_)
|
||||
# Delete temporary variables. This requires a bit cleverness about the
|
||||
# order, so we don't delete variables that are themselves conditions for
|
||||
# later variables.
|
||||
for chain in sorted(self.variables, key=len, reverse=True):
|
||||
if chain:
|
||||
where = []
|
||||
if len(chain) > 1:
|
||||
cond = ast.Boolop(ast.And(), chain)
|
||||
else:
|
||||
cond = chain[0]
|
||||
self.statements.append(ast.If(cond, where, []))
|
||||
else:
|
||||
where = self.statements
|
||||
v = self.variables[chain]
|
||||
names = [ast.Name(name, ast.Del()) for name in v]
|
||||
where.append(ast.Delete(names))
|
||||
# Clear temporary variables by setting them to None.
|
||||
if self.variables:
|
||||
variables = [ast.Name(name, ast.Store())
|
||||
for name in self.variables]
|
||||
clear = ast.Assign(variables, ast.Name("None", ast.Load()))
|
||||
self.statements.append(clear)
|
||||
# Fix line numbers.
|
||||
for stmt in self.statements:
|
||||
set_location(stmt, assert_.lineno, assert_.col_offset)
|
||||
@@ -448,26 +485,32 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
res_var = self.variable()
|
||||
expl_list = self.assign(ast.List([], ast.Load()))
|
||||
app = ast.Attribute(expl_list, "append", ast.Load())
|
||||
is_or = isinstance(boolop.op, ast.Or)
|
||||
is_or = int(isinstance(boolop.op, ast.Or))
|
||||
body = save = self.statements
|
||||
fail_save = self.on_failure
|
||||
levels = len(boolop.values) - 1
|
||||
self.push_format_context()
|
||||
# Process each operand, short-circuting if needed.
|
||||
for i, v in enumerate(boolop.values):
|
||||
if i:
|
||||
fail_inner = []
|
||||
self.on_failure.append(ast.If(cond, fail_inner, []))
|
||||
self.on_failure = fail_inner
|
||||
self.push_format_context()
|
||||
res, expl = self.visit(v)
|
||||
body.append(ast.Assign([ast.Name(res_var, ast.Store())], res))
|
||||
call = ast.Call(app, [ast.Str(expl)], [], None, None)
|
||||
body.append(ast.Expr(call))
|
||||
expl_format = self.pop_format_context(ast.Str(expl))
|
||||
call = ast.Call(app, [expl_format], [], None, None)
|
||||
self.on_failure.append(ast.Expr(call))
|
||||
if i < levels:
|
||||
inner = []
|
||||
cond = res
|
||||
if is_or:
|
||||
cond = ast.UnaryOp(ast.Not(), cond)
|
||||
self.enter_cond(cond, inner)
|
||||
inner = []
|
||||
self.statements.append(ast.If(cond, inner, []))
|
||||
self.statements = body = inner
|
||||
# Leave all conditions.
|
||||
self.leave_cond(levels)
|
||||
self.statements = save
|
||||
self.on_failure = fail_save
|
||||
expl_template = self.helper("format_boolop", expl_list, ast.Num(is_or))
|
||||
expl = self.pop_format_context(expl_template)
|
||||
return ast.Name(res_var, ast.Load()), self.explanation_param(expl)
|
||||
@@ -504,10 +547,11 @@ class AssertionRewriter(ast.NodeVisitor):
|
||||
new_star, expl = self.visit(call.starargs)
|
||||
arg_expls.append("*" + expl)
|
||||
if call.kwargs:
|
||||
new_kwarg, expl = self.visit(call.kwarg)
|
||||
new_kwarg, expl = self.visit(call.kwargs)
|
||||
arg_expls.append("**" + expl)
|
||||
expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
|
||||
new_call = ast.Call(new_func, new_args, new_kwargs, new_star, new_kwarg)
|
||||
new_call = ast.Call(new_func, new_args, new_kwargs,
|
||||
new_star, new_kwarg)
|
||||
res = self.assign(new_call)
|
||||
res_expl = self.explanation_param(self.display(res))
|
||||
outer_expl = "%s\n{%s = %s\n}" % (res_expl, res_expl, expl)
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
|
||||
import py
|
||||
|
||||
BuiltinAssertionError = py.builtin.builtins.AssertionError
|
||||
|
||||
# The _reprcompare attribute on the util module is used by the new assertion
|
||||
# interpretation code and assertion rewriter to detect this plugin was
|
||||
@@ -115,16 +116,11 @@ def assertrepr_compare(op, left, right):
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
explanation = ['(pytest_assertion plugin: representation of '
|
||||
'details failed. Probably an object has a faulty __repr__.)',
|
||||
str(excinfo)
|
||||
]
|
||||
|
||||
str(excinfo)]
|
||||
|
||||
if not explanation:
|
||||
return None
|
||||
|
||||
# Don't include pageloads of data, should be configurable
|
||||
if len(''.join(explanation)) > 80*8:
|
||||
explanation = ['Detailed information too verbose, truncated']
|
||||
|
||||
return [summary] + explanation
|
||||
|
||||
|
||||
@@ -11,22 +11,22 @@ def pytest_addoption(parser):
|
||||
group._addoption('-s', action="store_const", const="no", dest="capture",
|
||||
help="shortcut for --capture=no.")
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_cmdline_parse(pluginmanager, args):
|
||||
# we want to perform capturing already for plugin/conftest loading
|
||||
if '-s' in args or "--capture=no" in args:
|
||||
method = "no"
|
||||
elif hasattr(os, 'dup') and '--capture=sys' not in args:
|
||||
method = "fd"
|
||||
else:
|
||||
method = "sys"
|
||||
capman = CaptureManager(method)
|
||||
pluginmanager.register(capman, "capturemanager")
|
||||
|
||||
def addouterr(rep, outerr):
|
||||
repr = getattr(rep, 'longrepr', None)
|
||||
if not hasattr(repr, 'addsection'):
|
||||
return
|
||||
for secname, content in zip(["out", "err"], outerr):
|
||||
if content:
|
||||
repr.addsection("Captured std%s" % secname, content.rstrip())
|
||||
|
||||
def pytest_unconfigure(config):
|
||||
# registered in config.py during early conftest.py loading
|
||||
capman = config.pluginmanager.getplugin('capturemanager')
|
||||
while capman._method2capture:
|
||||
name, cap = capman._method2capture.popitem()
|
||||
# XXX logging module may wants to close it itself on process exit
|
||||
# otherwise we could do finalization here and call "reset()".
|
||||
cap.suspend()
|
||||
rep.sections.append(("Captured std%s" % secname, content))
|
||||
|
||||
class NoCapture:
|
||||
def startall(self):
|
||||
@@ -39,8 +39,9 @@ class NoCapture:
|
||||
return "", ""
|
||||
|
||||
class CaptureManager:
|
||||
def __init__(self):
|
||||
def __init__(self, defaultmethod=None):
|
||||
self._method2capture = {}
|
||||
self._defaultmethod = defaultmethod
|
||||
|
||||
def _maketempfile(self):
|
||||
f = py.std.tempfile.TemporaryFile()
|
||||
@@ -65,14 +66,6 @@ class CaptureManager:
|
||||
else:
|
||||
raise ValueError("unknown capturing method: %r" % method)
|
||||
|
||||
def _getmethod_preoptionparse(self, args):
|
||||
if '-s' in args or "--capture=no" in args:
|
||||
return "no"
|
||||
elif hasattr(os, 'dup') and '--capture=sys' not in args:
|
||||
return "fd"
|
||||
else:
|
||||
return "sys"
|
||||
|
||||
def _getmethod(self, config, fspath):
|
||||
if config.option.capture:
|
||||
method = config.option.capture
|
||||
@@ -85,16 +78,22 @@ class CaptureManager:
|
||||
method = "sys"
|
||||
return method
|
||||
|
||||
def reset_capturings(self):
|
||||
for name, cap in self._method2capture.items():
|
||||
cap.reset()
|
||||
|
||||
def resumecapture_item(self, item):
|
||||
method = self._getmethod(item.config, item.fspath)
|
||||
if not hasattr(item, 'outerr'):
|
||||
item.outerr = ('', '') # we accumulate outerr on the item
|
||||
return self.resumecapture(method)
|
||||
|
||||
def resumecapture(self, method):
|
||||
def resumecapture(self, method=None):
|
||||
if hasattr(self, '_capturing'):
|
||||
raise ValueError("cannot resume, already capturing with %r" %
|
||||
(self._capturing,))
|
||||
if method is None:
|
||||
method = self._defaultmethod
|
||||
cap = self._method2capture.get(method)
|
||||
self._capturing = method
|
||||
if cap is None:
|
||||
@@ -120,22 +119,20 @@ class CaptureManager:
|
||||
return "", ""
|
||||
|
||||
def activate_funcargs(self, pyfuncitem):
|
||||
if not hasattr(pyfuncitem, 'funcargs'):
|
||||
return
|
||||
assert not hasattr(self, '_capturing_funcargs')
|
||||
self._capturing_funcargs = capturing_funcargs = []
|
||||
for name, capfuncarg in pyfuncitem.funcargs.items():
|
||||
if name in ('capsys', 'capfd'):
|
||||
capturing_funcargs.append(capfuncarg)
|
||||
capfuncarg._start()
|
||||
funcargs = getattr(pyfuncitem, "funcargs", None)
|
||||
if funcargs is not None:
|
||||
for name, capfuncarg in funcargs.items():
|
||||
if name in ('capsys', 'capfd'):
|
||||
assert not hasattr(self, '_capturing_funcarg')
|
||||
self._capturing_funcarg = capfuncarg
|
||||
capfuncarg._start()
|
||||
|
||||
def deactivate_funcargs(self):
|
||||
capturing_funcargs = getattr(self, '_capturing_funcargs', None)
|
||||
if capturing_funcargs is not None:
|
||||
while capturing_funcargs:
|
||||
capfuncarg = capturing_funcargs.pop()
|
||||
capfuncarg._finalize()
|
||||
del self._capturing_funcargs
|
||||
capturing_funcarg = getattr(self, '_capturing_funcarg', None)
|
||||
if capturing_funcarg:
|
||||
outerr = capturing_funcarg._finalize()
|
||||
del self._capturing_funcarg
|
||||
return outerr
|
||||
|
||||
def pytest_make_collect_report(self, __multicall__, collector):
|
||||
method = self._getmethod(collector.config, collector.fspath)
|
||||
@@ -164,26 +161,18 @@ class CaptureManager:
|
||||
def pytest_runtest_teardown(self, item):
|
||||
self.resumecapture_item(item)
|
||||
|
||||
def pytest__teardown_final(self, __multicall__, session):
|
||||
method = self._getmethod(session.config, None)
|
||||
self.resumecapture(method)
|
||||
try:
|
||||
rep = __multicall__.execute()
|
||||
finally:
|
||||
outerr = self.suspendcapture()
|
||||
if rep:
|
||||
addouterr(rep, outerr)
|
||||
return rep
|
||||
|
||||
def pytest_keyboard_interrupt(self, excinfo):
|
||||
if hasattr(self, '_capturing'):
|
||||
self.suspendcapture()
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_makereport(self, __multicall__, item, call):
|
||||
self.deactivate_funcargs()
|
||||
funcarg_outerr = self.deactivate_funcargs()
|
||||
rep = __multicall__.execute()
|
||||
outerr = self.suspendcapture(item)
|
||||
if funcarg_outerr is not None:
|
||||
outerr = (outerr[0] + funcarg_outerr[0],
|
||||
outerr[1] + funcarg_outerr[1])
|
||||
if not rep.passed:
|
||||
addouterr(rep, outerr)
|
||||
if not rep.passed or rep.when == "teardown":
|
||||
@@ -191,23 +180,29 @@ class CaptureManager:
|
||||
item.outerr = outerr
|
||||
return rep
|
||||
|
||||
error_capsysfderror = "cannot use capsys and capfd at the same time"
|
||||
|
||||
def pytest_funcarg__capsys(request):
|
||||
"""enables capturing of writes to sys.stdout/sys.stderr and makes
|
||||
captured output available via ``capsys.readouterr()`` method calls
|
||||
which return a ``(out, err)`` tuple.
|
||||
"""
|
||||
return CaptureFuncarg(py.io.StdCapture)
|
||||
if "capfd" in request._funcargs:
|
||||
raise request.raiseerror(error_capsysfderror)
|
||||
return CaptureFixture(py.io.StdCapture)
|
||||
|
||||
def pytest_funcarg__capfd(request):
|
||||
"""enables capturing of writes to file descriptors 1 and 2 and makes
|
||||
captured output available via ``capfd.readouterr()`` method calls
|
||||
which return a ``(out, err)`` tuple.
|
||||
"""
|
||||
if "capsys" in request._funcargs:
|
||||
request.raiseerror(error_capsysfderror)
|
||||
if not hasattr(os, 'dup'):
|
||||
py.test.skip("capfd funcarg needs os.dup")
|
||||
return CaptureFuncarg(py.io.StdCaptureFD)
|
||||
pytest.skip("capfd funcarg needs os.dup")
|
||||
return CaptureFixture(py.io.StdCaptureFD)
|
||||
|
||||
class CaptureFuncarg:
|
||||
class CaptureFixture:
|
||||
def __init__(self, captureclass):
|
||||
self.capture = captureclass(now=False)
|
||||
|
||||
@@ -216,8 +211,9 @@ class CaptureFuncarg:
|
||||
|
||||
def _finalize(self):
|
||||
if hasattr(self, 'capture'):
|
||||
self.capture.reset()
|
||||
outerr = self.capture.reset()
|
||||
del self.capture
|
||||
return outerr
|
||||
|
||||
def readouterr(self):
|
||||
return self.capture.readouterr()
|
||||
|
||||
@@ -8,16 +8,18 @@ import pytest
|
||||
def pytest_cmdline_parse(pluginmanager, args):
|
||||
config = Config(pluginmanager)
|
||||
config.parse(args)
|
||||
if config.option.debug:
|
||||
config.trace.root.setwriter(sys.stderr.write)
|
||||
return config
|
||||
|
||||
def pytest_unconfigure(config):
|
||||
for func in config._cleanup:
|
||||
func()
|
||||
while 1:
|
||||
try:
|
||||
fin = config._cleanup.pop()
|
||||
except IndexError:
|
||||
break
|
||||
fin()
|
||||
|
||||
class Parser:
|
||||
""" Parser for command line arguments. """
|
||||
""" Parser for command line arguments and ini-file values. """
|
||||
|
||||
def __init__(self, usage=None, processopt=None):
|
||||
self._anonymous = OptionGroup("custom options", parser=self)
|
||||
@@ -33,15 +35,17 @@ class Parser:
|
||||
if option.dest:
|
||||
self._processopt(option)
|
||||
|
||||
def addnote(self, note):
|
||||
self._notes.append(note)
|
||||
|
||||
def getgroup(self, name, description="", after=None):
|
||||
""" get (or create) a named option Group.
|
||||
|
||||
:name: unique name of the option group.
|
||||
:name: name of the option group.
|
||||
:description: long description for --help output.
|
||||
:after: name of other group, used for ordering --help output.
|
||||
|
||||
The returned group object has an ``addoption`` method with the same
|
||||
signature as :py:func:`parser.addoption
|
||||
<_pytest.config.Parser.addoption>` but will be shown in the
|
||||
respective group in the output of ``pytest --help``.
|
||||
"""
|
||||
for group in self._groups:
|
||||
if group.name == name:
|
||||
@@ -55,7 +59,19 @@ class Parser:
|
||||
return group
|
||||
|
||||
def addoption(self, *opts, **attrs):
|
||||
""" add an optparse-style option. """
|
||||
""" register a command line option.
|
||||
|
||||
:opts: option names, can be short or long options.
|
||||
:attrs: same attributes which the ``add_option()`` function of the
|
||||
`optparse library
|
||||
<http://docs.python.org/library/optparse.html#module-optparse>`_
|
||||
accepts.
|
||||
|
||||
After command line parsing options are available on the pytest config
|
||||
object via ``config.option.NAME`` where ``NAME`` is usually set
|
||||
by passing a ``dest`` attribute, for example
|
||||
``addoption("--long", dest="NAME", ...)``.
|
||||
"""
|
||||
self._anonymous.addoption(*opts, **attrs)
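For illustration, a minimal sketch of how a conftest.py or plugin might register such an option and read it back later; the option name and ``dest`` are made up for the example::

    # conftest.py -- hypothetical example
    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true", dest="runslow",
                         default=False, help="also run tests marked as slow.")

    def pytest_configure(config):
        if config.option.runslow:      # available after command line parsing
            print("slow tests enabled")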
|
||||
|
||||
def parse(self, args):
|
||||
@@ -76,11 +92,20 @@ class Parser:
|
||||
return args
|
||||
|
||||
def addini(self, name, help, type=None, default=None):
|
||||
""" add an ini-file option with the given name and description. """
|
||||
""" register an ini-file option.
|
||||
|
||||
:name: name of the ini-variable
|
||||
:type: type of the variable, can be ``pathlist``, ``args`` or ``linelist``.
|
||||
:default: default value if no ini-file option exists but is queried.
|
||||
|
||||
The value of ini-variables can be retrieved via a call to
|
||||
:py:func:`config.getini(name) <_pytest.config.Config.getini>`.
|
||||
"""
|
||||
assert type in (None, "pathlist", "args", "linelist")
|
||||
self._inidict[name] = (help, type, default)
|
||||
self._ininames.append(name)
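A minimal sketch of registering an ini value and reading it back; the option name is illustrative only::

    # conftest.py -- hypothetical example
    def pytest_addoption(parser):
        parser.addini("default_timeout", help="default per-test timeout in seconds",
                      default="30")

    def pytest_configure(config):
        timeout = config.getini("default_timeout")  # "30" unless set in the ini file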
|
||||
|
||||
|
||||
class OptionGroup:
|
||||
def __init__(self, name, description="", parser=None):
|
||||
self.name = name
|
||||
@@ -151,20 +176,24 @@ class Conftest(object):
|
||||
p = current.join(opt1[len(opt)+1:], abs=1)
|
||||
self._confcutdir = p
|
||||
break
|
||||
for arg in args + [current]:
|
||||
foundanchor = False
|
||||
for arg in args:
|
||||
if hasattr(arg, 'startswith') and arg.startswith("--"):
|
||||
continue
|
||||
anchor = current.join(arg, abs=1)
|
||||
if anchor.check(): # we found some file object
|
||||
self._path2confmods[None] = self.getconftestmodules(anchor)
|
||||
# let's also consider test* dirs
|
||||
if anchor.check(dir=1):
|
||||
for x in anchor.listdir("test*"):
|
||||
if x.check(dir=1):
|
||||
self.getconftestmodules(x)
|
||||
break
|
||||
else:
|
||||
assert 0, "no root of filesystem?"
|
||||
self._try_load_conftest(anchor)
|
||||
foundanchor = True
|
||||
if not foundanchor:
|
||||
self._try_load_conftest(current)
|
||||
|
||||
def _try_load_conftest(self, anchor):
|
||||
self._path2confmods[None] = self.getconftestmodules(anchor)
|
||||
# let's also consider test* subdirs
|
||||
if anchor.check(dir=1):
|
||||
for x in anchor.listdir("test*"):
|
||||
if x.check(dir=1):
|
||||
self.getconftestmodules(x)
|
||||
|
||||
def getconftestmodules(self, path):
|
||||
""" return a list of imported conftest modules for the given path. """
|
||||
@@ -242,8 +271,9 @@ class CmdOptions(object):
|
||||
class Config(object):
|
||||
""" access to configuration values, pluginmanager and plugin hooks. """
|
||||
def __init__(self, pluginmanager=None):
|
||||
#: command line option values, usually added via parser.addoption(...)
|
||||
#: or parser.getgroup(...).addoption(...) calls
|
||||
#: command line option values, which must have been previously added
|
||||
#: via calls like ``parser.addoption(...)`` or
|
||||
#: ``parser.getgroup(groupname).addoption(...)``
|
||||
self.option = CmdOptions()
|
||||
self._parser = Parser(
|
||||
usage="usage: %prog [options] [file_or_dir] [file_or_dir] [...]",
|
||||
@@ -256,11 +286,14 @@ class Config(object):
|
||||
self.hook = self.pluginmanager.hook
|
||||
self._inicache = {}
|
||||
self._cleanup = []
|
||||
|
||||
|
||||
@classmethod
|
||||
def fromdictargs(cls, option_dict, args):
|
||||
""" constructor useable for subprocesses. """
|
||||
config = cls()
|
||||
# XXX slightly crude way to initialize capturing
|
||||
import _pytest.capture
|
||||
_pytest.capture.pytest_cmdline_parse(config.pluginmanager, args)
|
||||
config._preparse(args, addopts=False)
|
||||
config.option.__dict__.update(option_dict)
|
||||
for x in config.option.plugins:
|
||||
@@ -285,11 +318,10 @@ class Config(object):
|
||||
|
||||
def _setinitialconftest(self, args):
|
||||
# capture output during conftest init (#issue93)
|
||||
from _pytest.capture import CaptureManager
|
||||
capman = CaptureManager()
|
||||
self.pluginmanager.register(capman, 'capturemanager')
|
||||
# will be unregistered in capture.py's unconfigure()
|
||||
capman.resumecapture(capman._getmethod_preoptionparse(args))
|
||||
# XXX introduce load_conftest hook to avoid needing to know
|
||||
# about capturing plugin here
|
||||
capman = self.pluginmanager.getplugin("capturemanager")
|
||||
capman.resumecapture()
|
||||
try:
|
||||
try:
|
||||
self._conftest.setinitial(args)
|
||||
@@ -334,6 +366,7 @@ class Config(object):
|
||||
# Note that this can only be called once per testing process.
|
||||
assert not hasattr(self, 'args'), (
|
||||
"can only parse cmdline args at most once per Config object")
|
||||
self._origargs = args
|
||||
self._preparse(args)
|
||||
self._parser.hints.extend(self.pluginmanager._hints)
|
||||
args = self._parser.parse_setoption(args, self.option)
|
||||
@@ -341,6 +374,14 @@ class Config(object):
|
||||
args.append(py.std.os.getcwd())
|
||||
self.args = args
|
||||
|
||||
def addinivalue_line(self, name, line):
|
||||
""" add a line to an ini-file option. The option must have been
|
||||
declared but might not yet be set, in which case the line becomes
|
||||
the first line in its value. """
|
||||
x = self.getini(name)
|
||||
assert isinstance(x, list)
|
||||
x.append(line) # modifies the cached list inline
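A sketch of typical usage from a plugin's pytest_configure hook, in the same spirit as the marker registrations done by the core plugin manager; the marker name is made up::

    def pytest_configure(config):
        config.addinivalue_line("markers",
            "slow: mark a test as slow so it can be deselected with -m 'not slow'.")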
|
||||
|
||||
def getini(self, name):
|
||||
""" return configuration value from an ini file. If the
|
||||
specified name hasn't been registered through a prior ``parse.addini``
|
||||
@@ -422,7 +463,7 @@ class Config(object):
|
||||
|
||||
|
||||
def getcfg(args, inibasenames):
|
||||
args = [x for x in args if str(x)[0] != "-"]
|
||||
args = [x for x in args if not str(x).startswith("-")]
|
||||
if not args:
|
||||
args = [py.path.local()]
|
||||
for arg in args:
|
||||
|
||||
@@ -16,11 +16,10 @@ default_plugins = (
|
||||
"junitxml resultlog doctest").split()
|
||||
|
||||
class TagTracer:
|
||||
def __init__(self, prefix="[pytest] "):
|
||||
def __init__(self):
|
||||
self._tag2proc = {}
|
||||
self.writer = None
|
||||
self.indent = 0
|
||||
self.prefix = prefix
|
||||
|
||||
def get(self, name):
|
||||
return TagTracerSub(self, (name,))
|
||||
@@ -30,7 +29,7 @@ class TagTracer:
|
||||
if args:
|
||||
indent = " " * self.indent
|
||||
content = " ".join(map(str, args))
|
||||
self.writer("%s%s%s\n" %(self.prefix, indent, content))
|
||||
self.writer("%s%s [%s]\n" %(indent, content, ":".join(tags)))
|
||||
try:
|
||||
self._tag2proc[tags](tags, args)
|
||||
except KeyError:
|
||||
@@ -80,10 +79,11 @@ class PluginManager(object):
|
||||
self.import_plugin(spec)
|
||||
|
||||
def register(self, plugin, name=None, prepend=False):
|
||||
assert not self.isregistered(plugin), plugin
|
||||
if self._name2plugin.get(name, None) == -1:
|
||||
return
|
||||
name = name or getattr(plugin, '__name__', str(id(plugin)))
|
||||
if name in self._name2plugin:
|
||||
return False
|
||||
if self.isregistered(plugin, name):
|
||||
raise ValueError("Plugin already registered: %s=%s" %(name, plugin))
|
||||
#self.trace("registering", name, plugin)
|
||||
self._name2plugin[name] = plugin
|
||||
self.call_plugin(plugin, "pytest_addhooks", {'pluginmanager': self})
|
||||
@@ -212,6 +212,14 @@ class PluginManager(object):
|
||||
self.register(mod, modname)
|
||||
self.consider_module(mod)
|
||||
|
||||
def pytest_configure(self, config):
|
||||
config.addinivalue_line("markers",
|
||||
"tryfirst: mark a hook implementation function such that the "
|
||||
"plugin machinery will try to call it first/as early as possible.")
|
||||
config.addinivalue_line("markers",
|
||||
"trylast: mark a hook implementation function such that the "
|
||||
"plugin machinery will try to call it last/as late as possible.")
|
||||
|
||||
def pytest_plugin_registered(self, plugin):
|
||||
import pytest
|
||||
dic = self.call_plugin(plugin, "pytest_namespace", {}) or {}
|
||||
@@ -314,13 +322,15 @@ def importplugin(importspec):
|
||||
name = importspec
|
||||
try:
|
||||
mod = "_pytest." + name
|
||||
return __import__(mod, None, None, '__doc__')
|
||||
__import__(mod)
|
||||
return sys.modules[mod]
|
||||
except ImportError:
|
||||
#e = py.std.sys.exc_info()[1]
|
||||
#if str(e).find(name) == -1:
|
||||
# raise
|
||||
pass #
|
||||
return __import__(importspec, None, None, '__doc__')
|
||||
__import__(importspec)
|
||||
return sys.modules[importspec]
|
||||
|
||||
class MultiCall:
|
||||
""" execute a call into multiple python functions/methods. """
|
||||
@@ -432,10 +442,7 @@ _preinit = []
|
||||
def _preloadplugins():
|
||||
_preinit.append(PluginManager(load=True))
|
||||
|
||||
def main(args=None, plugins=None):
|
||||
""" returned exit code integer, after an in-process testing run
|
||||
with the given command line arguments, preloading an optional list
|
||||
of passed in plugin objects. """
|
||||
def _prepareconfig(args=None, plugins=None):
|
||||
if args is None:
|
||||
args = sys.argv[1:]
|
||||
elif isinstance(args, py.path.local):
|
||||
@@ -449,17 +456,22 @@ def main(args=None, plugins=None):
|
||||
else: # subsequent calls to main will create a fresh instance
|
||||
_pluginmanager = PluginManager(load=True)
|
||||
hook = _pluginmanager.hook
|
||||
try:
|
||||
if plugins:
|
||||
for plugin in plugins:
|
||||
_pluginmanager.register(plugin)
|
||||
config = hook.pytest_cmdline_parse(
|
||||
pluginmanager=_pluginmanager, args=args)
|
||||
exitstatus = hook.pytest_cmdline_main(config=config)
|
||||
except UsageError:
|
||||
e = sys.exc_info()[1]
|
||||
sys.stderr.write("ERROR: %s\n" %(e.args[0],))
|
||||
exitstatus = 3
|
||||
if plugins:
|
||||
for plugin in plugins:
|
||||
_pluginmanager.register(plugin)
|
||||
return hook.pytest_cmdline_parse(
|
||||
pluginmanager=_pluginmanager, args=args)
|
||||
|
||||
def main(args=None, plugins=None):
|
||||
""" return exit code, after performing an in-process test run.
|
||||
|
||||
:arg args: list of command line arguments.
|
||||
|
||||
:arg plugins: list of plugin objects to be auto-registered during
|
||||
initialization.
|
||||
"""
|
||||
config = _prepareconfig(args, plugins)
|
||||
exitstatus = config.hook.pytest_cmdline_main(config=config)
|
||||
return exitstatus
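A minimal sketch of calling this in-process entry point from a script; the argument list and the plugin object are purely illustrative::

    import pytest

    class MyPlugin:
        def pytest_sessionfinish(self, session, exitstatus):
            print("session finished with exit status", exitstatus)

    exitcode = pytest.main(["-q", "tests"], plugins=[MyPlugin()])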
|
||||
|
||||
class UsageError(Exception):
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
""" version info, help messages, tracing configuration. """
|
||||
import py
|
||||
import pytest
|
||||
import inspect, sys
|
||||
import os, inspect, sys
|
||||
from _pytest.core import varnames
|
||||
|
||||
def pytest_addoption(parser):
|
||||
@@ -18,7 +18,29 @@ def pytest_addoption(parser):
|
||||
help="trace considerations of conftest.py files."),
|
||||
group.addoption('--debug',
|
||||
action="store_true", dest="debug", default=False,
|
||||
help="generate and show internal debugging information.")
|
||||
help="store internal tracing debug information in 'pytestdebug.log'.")
|
||||
|
||||
|
||||
def pytest_cmdline_parse(__multicall__):
|
||||
config = __multicall__.execute()
|
||||
if config.option.debug:
|
||||
path = os.path.abspath("pytestdebug.log")
|
||||
f = open(path, 'w')
|
||||
config._debugfile = f
|
||||
f.write("versions pytest-%s, py-%s, python-%s\ncwd=%s\nargs=%s\n\n" %(
|
||||
pytest.__version__, py.__version__, ".".join(map(str, sys.version_info)),
|
||||
os.getcwd(), config._origargs))
|
||||
config.trace.root.setwriter(f.write)
|
||||
sys.stderr.write("writing pytestdebug information to %s\n" % path)
|
||||
return config
|
||||
|
||||
@pytest.mark.trylast
|
||||
def pytest_unconfigure(config):
|
||||
if hasattr(config, '_debugfile'):
|
||||
config._debugfile.close()
|
||||
sys.stderr.write("wrote pytestdebug information to %s\n" %
|
||||
config._debugfile.name)
|
||||
config.trace.root.setwriter(None)
|
||||
|
||||
|
||||
def pytest_cmdline_main(config):
|
||||
@@ -34,6 +56,7 @@ def pytest_cmdline_main(config):
|
||||
elif config.option.help:
|
||||
config.pluginmanager.do_configure(config)
|
||||
showhelp(config)
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
return 0
|
||||
|
||||
def showhelp(config):
|
||||
@@ -56,6 +79,8 @@ def showhelp(config):
|
||||
|
||||
tw.line() ; tw.line()
|
||||
#tw.sep("=")
|
||||
tw.line("to see available markers type: py.test --markers")
|
||||
tw.line("to see available fixtures type: py.test --fixtures")
|
||||
return
|
||||
|
||||
tw.line("conftest.py options:")
|
||||
@@ -91,7 +116,7 @@ def pytest_report_header(config):
|
||||
verinfo = getpluginversioninfo(config)
|
||||
if verinfo:
|
||||
lines.extend(verinfo)
|
||||
|
||||
|
||||
if config.option.traceconfig:
|
||||
lines.append("active plugins:")
|
||||
plugins = []
|
||||
|
||||
@@ -23,8 +23,10 @@ def pytest_cmdline_preparse(config, args):
|
||||
"""modify command line arguments before option parsing. """
|
||||
|
||||
def pytest_addoption(parser):
|
||||
"""add optparse-style options and ini-style config values via calls
|
||||
to ``parser.addoption`` and ``parser.addini(...)``.
|
||||
"""use the parser to add optparse-style options and ini-style
|
||||
config values via calls, see :py:func:`parser.addoption(...)
|
||||
<_pytest.config.Parser.addoption>`
|
||||
and :py:func:`parser.addini(...) <_pytest.config.Parser.addini>`.
|
||||
"""
|
||||
|
||||
def pytest_cmdline_main(config):
|
||||
@@ -121,16 +123,23 @@ def pytest_generate_tests(metafunc):
|
||||
def pytest_itemstart(item, node=None):
|
||||
""" (deprecated, use pytest_runtest_logstart). """
|
||||
|
||||
def pytest_runtest_protocol(item):
|
||||
""" implements the standard runtest_setup/call/teardown protocol including
|
||||
capturing exceptions and calling reporting hooks on the results accordingly.
|
||||
def pytest_runtest_protocol(item, nextitem):
|
||||
""" implements the runtest_setup/call/teardown protocol for
|
||||
the given test item, including capturing exceptions and calling
|
||||
reporting hooks.
|
||||
|
||||
:arg item: test item for which the runtest protocol is performed.
|
||||
|
||||
:arg nextitem: the scheduled-to-be-next test item (or None if this
|
||||
is the end my friend). This argument is passed on to
|
||||
:py:func:`pytest_runtest_teardown`.
|
||||
|
||||
:return boolean: True if no further hook implementations should be invoked.
|
||||
"""
|
||||
pytest_runtest_protocol.firstresult = True
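As a sketch, a conftest.py can observe the protocol without taking it over by returning None, so the default runner implementation still executes; this hook body is purely illustrative::

    # conftest.py -- hypothetical example
    def pytest_runtest_protocol(item, nextitem):
        nextid = getattr(nextitem, "nodeid", None)
        print("running %s (next: %s)" % (item.nodeid, nextid))
        return None   # None lets the default implementation perform the run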
|
||||
|
||||
def pytest_runtest_logstart(nodeid, location):
|
||||
""" signal the start of a test run. """
|
||||
""" signal the start of running a single test item. """
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
""" called before ``pytest_runtest_call(item)``. """
|
||||
@@ -138,8 +147,14 @@ def pytest_runtest_setup(item):
|
||||
def pytest_runtest_call(item):
|
||||
""" called to execute the test ``item``. """
|
||||
|
||||
def pytest_runtest_teardown(item):
|
||||
""" called after ``pytest_runtest_call``. """
|
||||
def pytest_runtest_teardown(item, nextitem):
|
||||
""" called after ``pytest_runtest_call``.
|
||||
|
||||
:arg nextitem: the scheduled-to-be-next test item (None if no further
|
||||
test item is scheduled). This argument can be used to
|
||||
perform exact teardowns, i.e. calling just enough finalizers
|
||||
so that nextitem only needs to call setup-functions.
|
||||
"""
|
||||
|
||||
def pytest_runtest_makereport(item, call):
|
||||
""" return a :py:class:`_pytest.runner.TestReport` object
|
||||
@@ -149,15 +164,8 @@ def pytest_runtest_makereport(item, call):
|
||||
pytest_runtest_makereport.firstresult = True
|
||||
|
||||
def pytest_runtest_logreport(report):
|
||||
""" process item test report. """
|
||||
|
||||
# special handling for final teardown - somewhat internal for now
|
||||
def pytest__teardown_final(session):
|
||||
""" called before test session finishes. """
|
||||
pytest__teardown_final.firstresult = True
|
||||
|
||||
def pytest__teardown_final_logerror(report, session):
|
||||
""" called if runtest_teardown_final failed. """
|
||||
""" process a test setup/call/teardown report relating to
|
||||
the respective phase of executing a test. """
|
||||
|
||||
# -------------------------------------------------------------------------
|
||||
# test session related hooks
|
||||
@@ -187,7 +195,7 @@ def pytest_assertrepr_compare(config, op, left, right):
|
||||
# hooks for influencing reporting (invoked from _pytest_terminal)
|
||||
# -------------------------------------------------------------------------
|
||||
|
||||
def pytest_report_header(config):
|
||||
def pytest_report_header(config, startdir):
|
||||
""" return a string to be displayed as header info for terminal reporting."""
|
||||
|
||||
def pytest_report_teststatus(report):
|
||||
|
||||
254
_pytest/impl
Normal file
@@ -0,0 +1,254 @@
|
||||
Sorting per-resource
|
||||
-----------------------------
|
||||
|
||||
for any given set of items:
|
||||
|
||||
- collect items per session-scoped parametrized funcarg
|
||||
- re-order items until no parametrizations are mixed (a rough sketch follows the example below)
|
||||
|
||||
examples:
|
||||
|
||||
test()
|
||||
test1(s1)
|
||||
test1(s2)
|
||||
test2()
|
||||
test3(s1)
|
||||
test3(s2)
|
||||
|
||||
gets sorted to:
|
||||
|
||||
test()
|
||||
test2()
|
||||
test1(s1)
|
||||
test3(s1)
|
||||
test1(s2)
|
||||
test3(s2)
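A rough sketch of such a reordering, assuming a helper that returns the session-scoped parameter of an item (or None for unparametrized items); both names are invented for the illustration::

    def reorder(items, session_param):
        # keep items that share a session-scoped parametrization together;
        # unparametrized items run first, groups keep their first-seen order
        groups, order = {}, []
        for item in items:
            key = session_param(item)          # e.g. None, "s1", "s2"
            if key not in groups:
                groups[key] = []
                order.append(key)
            groups[key].append(item)
        order.sort(key=lambda k: k is not None)   # stable sort: None first
        return [item for key in order for item in groups[key]]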
|
||||
|
||||
|
||||
the new @setup functions
|
||||
--------------------------------------
|
||||
|
||||
Consider a given @setup-marked function::
|
||||
|
||||
@pytest.mark.setup(maxscope=SCOPE)
|
||||
def mysetup(request, arg1, arg2, ...)
|
||||
...
|
||||
request.addfinalizer(fin)
|
||||
...
|
||||
|
||||
then FUNCARGSET denotes the set of (arg1, arg2, ...) funcargs and
|
||||
all of its dependent funcargs. The mysetup function will execute
|
||||
for any matching test item once per scope.
|
||||
|
||||
The scope is determined as the minimum scope of all scopes of the args
|
||||
in FUNCARGSET and the given "maxscope".
|
||||
|
||||
If mysetup has been called and no finalizers have been called it is
|
||||
called "active".
|
||||
|
||||
Furthermore the following rules apply:
|
||||
|
||||
- if an arg value in FUNCARGSET is about to be torn down, the
|
||||
mysetup-registered finalizers will execute as well.
|
||||
|
||||
- There will never be two active mysetup invocations.
|
||||
|
||||
Example 1, session scope::
|
||||
|
||||
@pytest.mark.funcarg(scope="session", params=[1,2])
|
||||
def db(request):
|
||||
request.addfinalizer(db_finalize)
|
||||
|
||||
@pytest.mark.setup
|
||||
def mysetup(request, db):
|
||||
request.addfinalizer(mysetup_finalize)
|
||||
...
|
||||
|
||||
And a given test module:
|
||||
|
||||
def test_something():
|
||||
...
|
||||
def test_otherthing():
|
||||
pass
|
||||
|
||||
Here is what happens::
|
||||
|
||||
db(request) executes with request.param == 1
|
||||
mysetup(request, db) executes
|
||||
test_something() executes
|
||||
test_otherthing() executes
|
||||
mysetup_finalize() executes
|
||||
db_finalize() executes
|
||||
db(request) executes with request.param == 2
|
||||
mysetup(request, db) executes
|
||||
test_something() executes
|
||||
test_otherthing() executes
|
||||
mysetup_finalize() executes
|
||||
db_finalize() executes
|
||||
|
||||
Example 2, session/function scope::
|
||||
|
||||
@pytest.mark.funcarg(scope="session", params=[1,2])
|
||||
def db(request):
|
||||
request.addfinalizer(db_finalize)
|
||||
|
||||
@pytest.mark.setup(scope="function")
|
||||
def mysetup(request, db):
|
||||
...
|
||||
request.addfinalizer(mysetup_finalize)
|
||||
...
|
||||
|
||||
And a given test module:
|
||||
|
||||
def test_something():
|
||||
...
|
||||
def test_otherthing():
|
||||
pass
|
||||
|
||||
Here is what happens::
|
||||
|
||||
db(request) executes with request.param == 1
|
||||
mysetup(request, db) executes
|
||||
test_something() executes
|
||||
mysetup_finalize() executes
|
||||
mysetup(request, db) executes
|
||||
test_otherthing() executes
|
||||
mysetup_finalize() executes
|
||||
db_finalize() executes
|
||||
db(request) executes with request.param == 2
|
||||
mysetup(request, db) executes
|
||||
test_something() executes
|
||||
mysetup_finalize() executes
|
||||
mysetup(request, db) executes
|
||||
test_otherthing() executes
|
||||
mysetup_finalize() executes
|
||||
db_finalize() executes
|
||||
|
||||
|
||||
Example 3 - funcargs session-mix
|
||||
----------------------------------------
|
||||
|
||||
Similar with funcargs, an example::
|
||||
|
||||
@pytest.mark.funcarg(scope="session", params=[1,2])
|
||||
def db(request):
|
||||
request.addfinalizer(db_finalize)
|
||||
|
||||
@pytest.mark.funcarg(scope="function")
|
||||
def table(request, db):
|
||||
...
|
||||
request.addfinalizer(table_finalize)
|
||||
...
|
||||
|
||||
And a given test module:
|
||||
|
||||
def test_something(table):
|
||||
...
|
||||
def test_otherthing(table):
|
||||
pass
|
||||
def test_thirdthing():
|
||||
pass
|
||||
|
||||
Here is what happens::
|
||||
|
||||
db(request) executes with param == 1
|
||||
table(request, db)
|
||||
test_something(table)
|
||||
table_finalize()
|
||||
table(request, db)
|
||||
test_otherthing(table)
|
||||
table_finalize()
|
||||
db_finalize
|
||||
db(request) executes with param == 2
|
||||
table(request, db)
|
||||
test_something(table)
|
||||
table_finalize()
|
||||
table(request, db)
|
||||
test_otherthing(table)
|
||||
table_finalize()
|
||||
db_finalize
|
||||
test_thirdthing()
|
||||
|
||||
Data structures
|
||||
--------------------
|
||||
|
||||
pytest internally maintains a dict of active funcargs with cache, param,
|
||||
finalizer, (scopeitem?) information:
|
||||
|
||||
active_funcargs = dict()
|
||||
|
||||
if a parametrized "db" is activated:
|
||||
|
||||
active_funcargs["db"] = FuncargInfo(dbvalue, paramindex,
|
||||
FuncargFinalize(...), scopeitem)
|
||||
|
||||
if a test is torn down and the next test requires a differently
|
||||
parametrized "db":
|
||||
|
||||
for argname in item.callspec.params:
|
||||
if argname in active_funcargs:
|
||||
funcarginfo = active_funcargs[argname]
|
||||
if funcarginfo.param != item.callspec.params[argname]:
|
||||
funcarginfo.callfinalizer()
|
||||
del node2funcarg[funcarginfo.scopeitem]
|
||||
del active_funcargs[argname]
|
||||
nodes_to_be_torn_down = ...
|
||||
for node in nodes_to_be_torn_down:
|
||||
if node in node2funcarg:
|
||||
argname = node2funcarg[node]
|
||||
active_funcargs[argname].callfinalizer()
|
||||
del node2funcarg[node]
|
||||
del active_funcargs[argname]
|
||||
|
||||
if a test is setup requiring a "db" funcarg:
|
||||
|
||||
if "db" in active_funcargs:
|
||||
return active_funcargs["db"][0]
|
||||
funcarginfo = setup_funcarg()
|
||||
active_funcargs["db"] = funcarginfo
|
||||
node2funcarg[funcarginfo.scopeitem] = "db"
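A compact, runnable restatement of the caching rule above; FuncargInfo here is a stand-in for whatever structure pytest ends up using, and setup_funcarg is an assumed callable::

    class FuncargInfo:
        def __init__(self, value, param, finalizer, scopeitem):
            self.value = value
            self.param = param
            self.callfinalizer = finalizer
            self.scopeitem = scopeitem

    active_funcargs = {}
    node2funcarg = {}

    def getfuncargvalue(argname, setup_funcarg):
        # reuse the active value if present, otherwise set it up and
        # remember which scope item is responsible for tearing it down
        if argname in active_funcargs:
            return active_funcargs[argname].value
        info = setup_funcarg()                 # -> FuncargInfo
        active_funcargs[argname] = info
        node2funcarg[info.scopeitem] = argname
        return info.value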
|
||||
|
||||
Implementation plan for resources
|
||||
------------------------------------------
|
||||
|
||||
1. Revert FuncargRequest to the old form, unmerge item/request
|
||||
(done)
|
||||
2. make funcarg factories be discovered at collection time
|
||||
3. Introduce funcarg marker
|
||||
4. Introduce funcarg scope parameter
|
||||
5. Introduce funcarg parametrize parameter
|
||||
6. make setup functions be discovered at collection time
|
||||
7. (Introduce a pytest_fixture_protocol/setup_funcargs hook)
|
||||
|
||||
methods and data structures
|
||||
--------------------------------
|
||||
|
||||
A FuncargManager holds all information about funcarg definitions
|
||||
including parametrization and scope definitions. It implements
|
||||
a pytest_generate_tests hook which performs parametrization as appropriate.
|
||||
|
||||
as a simple example, let's consider a tree where a test function requires
|
||||
a "abc" funcarg and its factory defines it as parametrized and scoped
|
||||
for Modules. When collection hits the function item, it creates
|
||||
the metafunc object, and calls funcargdb.pytest_generate_tests(metafunc)
|
||||
which looks up available funcarg factories and their scope and parametrization.
|
||||
This information is equivalent to what can be provided today directly
|
||||
at the function site and it should thus be relatively straightforward
|
||||
to implement the additional way of defining parametrization/scoping.
|
||||
|
||||
conftest loading:
|
||||
each funcarg-factory will populate the session.funcargmanager
|
||||
|
||||
When a test item is collected, it grows a dictionary
|
||||
(funcargname2factorycalllist). A factory lookup is performed
|
||||
for each required funcarg. The resulting factory call is stored
|
||||
with the item. If a function is parametrized multiple items are
|
||||
created with respective factory calls. Else if a factory is parametrized
|
||||
multiple items and calls to the factory function are created as well.
|
||||
|
||||
At setup time, an item populates a funcargs mapping, mapping names
|
||||
to values. If a value is missing, funcarg factories are queried for the given item;
|
||||
test functions and setup functions are put in a class
|
||||
which looks up required funcarg factories.
|
||||
|
||||
|
||||
@@ -25,21 +25,39 @@ except NameError:
|
||||
long = int
|
||||
|
||||
|
||||
class Junit(py.xml.Namespace):
|
||||
pass
|
||||
|
||||
|
||||
# We need to get the subset of the invalid unicode ranges according to
|
||||
# XML 1.0 which are valid in this python build. Hence we calculate
|
||||
# this dynamically instead of hardcoding it. The spec range of valid
|
||||
# chars is: Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD]
|
||||
# | [#x10000-#x10FFFF]
|
||||
_illegal_unichrs = [(0x00, 0x08), (0x0B, 0x0C), (0x0E, 0x19),
|
||||
(0xD800, 0xDFFF), (0xFDD0, 0xFFFF)]
|
||||
_illegal_ranges = [unicode("%s-%s") % (unichr(low), unichr(high))
|
||||
for (low, high) in _illegal_unichrs
|
||||
_legal_chars = (0x09, 0x0A, 0x0d)
|
||||
_legal_ranges = (
|
||||
(0x20, 0xD7FF),
|
||||
(0xE000, 0xFFFD),
|
||||
(0x10000, 0x10FFFF),
|
||||
)
|
||||
_legal_xml_re = [unicode("%s-%s") % (unichr(low), unichr(high))
|
||||
for (low, high) in _legal_ranges
|
||||
if low < sys.maxunicode]
|
||||
illegal_xml_re = re.compile(unicode('[%s]') %
|
||||
unicode('').join(_illegal_ranges))
|
||||
del _illegal_unichrs
|
||||
del _illegal_ranges
|
||||
_legal_xml_re = [unichr(x) for x in _legal_chars] + _legal_xml_re
|
||||
illegal_xml_re = re.compile(unicode('[^%s]') %
|
||||
unicode('').join(_legal_xml_re))
|
||||
del _legal_chars
|
||||
del _legal_ranges
|
||||
del _legal_xml_re
|
||||
|
||||
def bin_xml_escape(arg):
|
||||
def repl(matchobj):
|
||||
i = ord(matchobj.group())
|
||||
if i <= 0xFF:
|
||||
return unicode('#x%02X') % i
|
||||
else:
|
||||
return unicode('#x%04X') % i
|
||||
return py.xml.raw(illegal_xml_re.sub(repl, py.xml.escape(arg)))
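A sketch of what the helper does with a control character that XML 1.0 forbids; the result is wrapped in py.xml.raw() so later serialization does not escape it a second time::

    escaped = bin_xml_escape(u"ok\x01bad")   # the 0x01 byte becomes the text "#x01"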
|
||||
|
||||
def pytest_addoption(parser):
|
||||
group = parser.getgroup("terminal reporting")
|
||||
@@ -63,122 +81,105 @@ def pytest_unconfigure(config):
|
||||
config.pluginmanager.unregister(xml)
|
||||
|
||||
|
||||
def mangle_testnames(names):
|
||||
names = [x.replace(".py", "") for x in names if x != '()']
|
||||
names[0] = names[0].replace("/", '.')
|
||||
return names
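For example (the node id parts are illustrative)::

    >>> mangle_testnames(["tests/test_foo.py", "TestClass", "()", "test_method"])
    ['tests.test_foo', 'TestClass', 'test_method']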
|
||||
|
||||
class LogXML(object):
|
||||
def __init__(self, logfile, prefix):
|
||||
logfile = os.path.expanduser(os.path.expandvars(logfile))
|
||||
self.logfile = os.path.normpath(logfile)
|
||||
self.logfile = os.path.normpath(os.path.abspath(logfile))
|
||||
self.prefix = prefix
|
||||
self.test_logs = []
|
||||
self.tests = []
|
||||
self.passed = self.skipped = 0
|
||||
self.failed = self.errors = 0
|
||||
self._durations = {}
|
||||
|
||||
def _opentestcase(self, report):
|
||||
names = report.nodeid.split("::")
|
||||
names[0] = names[0].replace("/", '.')
|
||||
names = tuple(names)
|
||||
d = {'time': self._durations.pop(report.nodeid, "0")}
|
||||
names = [x.replace(".py", "") for x in names if x != "()"]
|
||||
names = mangle_testnames(report.nodeid.split("::"))
|
||||
classnames = names[:-1]
|
||||
if self.prefix:
|
||||
classnames.insert(0, self.prefix)
|
||||
d['classname'] = ".".join(classnames)
|
||||
d['name'] = py.xml.escape(names[-1])
|
||||
attrs = ['%s="%s"' % item for item in sorted(d.items())]
|
||||
self.test_logs.append("\n<testcase %s>" % " ".join(attrs))
|
||||
self.tests.append(Junit.testcase(
|
||||
classname=".".join(classnames),
|
||||
name=names[-1],
|
||||
time=getattr(report, 'duration', 0)
|
||||
))
|
||||
|
||||
def _closetestcase(self):
|
||||
self.test_logs.append("</testcase>")
|
||||
|
||||
def appendlog(self, fmt, *args):
|
||||
def repl(matchobj):
|
||||
i = ord(matchobj.group())
|
||||
if i <= 0xFF:
|
||||
return unicode('#x%02X') % i
|
||||
else:
|
||||
return unicode('#x%04X') % i
|
||||
args = tuple([illegal_xml_re.sub(repl, py.xml.escape(arg))
|
||||
for arg in args])
|
||||
self.test_logs.append(fmt % args)
|
||||
def append(self, obj):
|
||||
self.tests[-1].append(obj)
|
||||
|
||||
def append_pass(self, report):
|
||||
self.passed += 1
|
||||
self._opentestcase(report)
|
||||
self._closetestcase()
|
||||
|
||||
def append_failure(self, report):
|
||||
self._opentestcase(report)
|
||||
#msg = str(report.longrepr.reprtraceback.extraline)
|
||||
if "xfail" in report.keywords:
|
||||
self.appendlog(
|
||||
'<skipped message="xfail-marked test passes unexpectedly"/>')
|
||||
if hasattr(report, "wasxfail"):
|
||||
self.append(
|
||||
Junit.skipped(message="xfail-marked test passes unexpectedly"))
|
||||
self.skipped += 1
|
||||
else:
|
||||
self.appendlog('<failure message="test failure">%s</failure>',
|
||||
report.longrepr)
|
||||
sec = dict(report.sections)
|
||||
fail = Junit.failure(message="test failure")
|
||||
fail.append(str(report.longrepr))
|
||||
self.append(fail)
|
||||
for name in ('out', 'err'):
|
||||
content = sec.get("Captured std%s" % name)
|
||||
if content:
|
||||
tag = getattr(Junit, 'system-'+name)
|
||||
self.append(tag(bin_xml_escape(content)))
|
||||
self.failed += 1
|
||||
self._closetestcase()
|
||||
|
||||
def append_collect_failure(self, report):
|
||||
self._opentestcase(report)
|
||||
#msg = str(report.longrepr.reprtraceback.extraline)
|
||||
self.appendlog('<failure message="collection failure">%s</failure>',
|
||||
report.longrepr)
|
||||
self._closetestcase()
|
||||
self.append(Junit.failure(str(report.longrepr),
|
||||
message="collection failure"))
|
||||
self.errors += 1
|
||||
|
||||
def append_collect_skipped(self, report):
|
||||
self._opentestcase(report)
|
||||
#msg = str(report.longrepr.reprtraceback.extraline)
|
||||
self.appendlog('<skipped message="collection skipped">%s</skipped>',
|
||||
report.longrepr)
|
||||
self._closetestcase()
|
||||
self.append(Junit.skipped(str(report.longrepr),
|
||||
message="collection skipped"))
|
||||
self.skipped += 1
|
||||
|
||||
def append_error(self, report):
|
||||
self._opentestcase(report)
|
||||
self.appendlog('<error message="test setup failure">%s</error>',
|
||||
report.longrepr)
|
||||
self._closetestcase()
|
||||
self.append(Junit.error(str(report.longrepr),
|
||||
message="test setup failure"))
|
||||
self.errors += 1
|
||||
|
||||
def append_skipped(self, report):
|
||||
self._opentestcase(report)
|
||||
if "xfail" in report.keywords:
|
||||
self.appendlog(
|
||||
'<skipped message="expected test failure">%s</skipped>',
|
||||
report.keywords['xfail'])
|
||||
if hasattr(report, "wasxfail"):
|
||||
self.append(Junit.skipped(str(report.wasxfail),
|
||||
message="expected test failure"))
|
||||
else:
|
||||
filename, lineno, skipreason = report.longrepr
|
||||
if skipreason.startswith("Skipped: "):
|
||||
skipreason = skipreason[9:]
|
||||
self.appendlog('<skipped type="pytest.skip" '
|
||||
'message="%s">%s</skipped>',
|
||||
skipreason, "%s:%s: %s" % report.longrepr,
|
||||
)
|
||||
self._closetestcase()
|
||||
self.append(
|
||||
Junit.skipped("%s:%s: %s" % report.longrepr,
|
||||
type="pytest.skip",
|
||||
message=skipreason
|
||||
))
|
||||
self.skipped += 1
|
||||
|
||||
def pytest_runtest_logreport(self, report):
|
||||
if report.passed:
|
||||
self.append_pass(report)
|
||||
if report.when == "call": # ignore setup/teardown
|
||||
self._opentestcase(report)
|
||||
self.append_pass(report)
|
||||
elif report.failed:
|
||||
self._opentestcase(report)
|
||||
if report.when != "call":
|
||||
self.append_error(report)
|
||||
else:
|
||||
self.append_failure(report)
|
||||
elif report.skipped:
|
||||
self._opentestcase(report)
|
||||
self.append_skipped(report)
|
||||
|
||||
def pytest_runtest_call(self, item, __multicall__):
|
||||
start = time.time()
|
||||
try:
|
||||
return __multicall__.execute()
|
||||
finally:
|
||||
self._durations[item.nodeid] = time.time() - start
|
||||
|
||||
def pytest_collectreport(self, report):
|
||||
if not report.passed:
|
||||
self._opentestcase(report)
|
||||
if report.failed:
|
||||
self.append_collect_failure(report)
|
||||
else:
|
||||
@@ -187,10 +188,11 @@ class LogXML(object):
|
||||
def pytest_internalerror(self, excrepr):
|
||||
self.errors += 1
|
||||
data = py.xml.escape(excrepr)
|
||||
self.test_logs.append(
|
||||
'\n<testcase classname="pytest" name="internal">'
|
||||
' <error message="internal error">'
|
||||
'%s</error></testcase>' % data)
|
||||
self.tests.append(
|
||||
Junit.testcase(
|
||||
Junit.error(data, message="internal error"),
|
||||
classname="pytest",
|
||||
name="internal"))
|
||||
|
||||
def pytest_sessionstart(self, session):
|
||||
self.suite_start_time = time.time()
|
||||
@@ -204,17 +206,17 @@ class LogXML(object):
|
||||
suite_stop_time = time.time()
|
||||
suite_time_delta = suite_stop_time - self.suite_start_time
|
||||
numtests = self.passed + self.failed
|
||||
|
||||
logfile.write('<?xml version="1.0" encoding="utf-8"?>')
|
||||
logfile.write('<testsuite ')
|
||||
logfile.write('name="" ')
|
||||
logfile.write('errors="%i" ' % self.errors)
|
||||
logfile.write('failures="%i" ' % self.failed)
|
||||
logfile.write('skips="%i" ' % self.skipped)
|
||||
logfile.write('tests="%i" ' % numtests)
|
||||
logfile.write('time="%.3f"' % suite_time_delta)
|
||||
logfile.write(' >')
|
||||
logfile.writelines(self.test_logs)
|
||||
logfile.write('</testsuite>')
|
||||
logfile.write(Junit.testsuite(
|
||||
self.tests,
|
||||
name="",
|
||||
errors=self.errors,
|
||||
failures=self.failed,
|
||||
skips=self.skipped,
|
||||
tests=numtests,
|
||||
time="%.3f" % suite_time_delta,
|
||||
).unicode(indent=0))
|
||||
logfile.close()
|
||||
|
||||
def pytest_terminal_summary(self, terminalreporter):
|
||||
|
||||
238
_pytest/main.py
@@ -2,7 +2,15 @@
|
||||
|
||||
import py
|
||||
import pytest, _pytest
|
||||
import os, sys
|
||||
import inspect
|
||||
import os, sys, imp
|
||||
try:
|
||||
from collections import MutableMapping as MappingMixin
|
||||
except ImportError:
|
||||
from UserDict import DictMixin as MappingMixin
|
||||
|
||||
from _pytest.mark import MarkInfo
|
||||
|
||||
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
|
||||
|
||||
# exitcodes for the command line
|
||||
@@ -10,6 +18,9 @@ EXIT_OK = 0
|
||||
EXIT_TESTSFAILED = 1
|
||||
EXIT_INTERRUPTED = 2
|
||||
EXIT_INTERNALERROR = 3
|
||||
EXIT_USAGEERROR = 4
|
||||
|
||||
name_re = py.std.re.compile("^[a-zA-Z_]\w*$")
|
||||
|
||||
def pytest_addoption(parser):
|
||||
parser.addini("norecursedirs", "directory patterns to avoid for recursion",
|
||||
@@ -26,6 +37,8 @@ def pytest_addoption(parser):
|
||||
group._addoption('--maxfail', metavar="num",
|
||||
action="store", type="int", dest="maxfail", default=0,
|
||||
help="exit after first num failures or errors.")
|
||||
group._addoption('--strict', action="store_true",
|
||||
help="run pytest in strict mode, warnings become errors.")
|
||||
|
||||
group = parser.getgroup("collect", "collection")
|
||||
group.addoption('--collectonly',
|
||||
@@ -48,7 +61,7 @@ def pytest_addoption(parser):
|
||||
def pytest_namespace():
|
||||
collect = dict(Item=Item, Collector=Collector, File=File, Session=Session)
|
||||
return dict(collect=collect)
|
||||
|
||||
|
||||
def pytest_configure(config):
|
||||
py.test.config = config # compatibility
|
||||
if config.option.exitfirst:
|
||||
@@ -60,30 +73,34 @@ def wrap_session(config, doit):
|
||||
session.exitstatus = EXIT_OK
|
||||
initstate = 0
|
||||
try:
|
||||
config.pluginmanager.do_configure(config)
|
||||
initstate = 1
|
||||
config.hook.pytest_sessionstart(session=session)
|
||||
initstate = 2
|
||||
doit(config, session)
|
||||
except pytest.UsageError:
|
||||
raise
|
||||
except KeyboardInterrupt:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
|
||||
session.exitstatus = EXIT_INTERRUPTED
|
||||
except:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.pluginmanager.notify_exception(excinfo, config.option)
|
||||
session.exitstatus = EXIT_INTERNALERROR
|
||||
if excinfo.errisinstance(SystemExit):
|
||||
sys.stderr.write("mainloop: caught Spurious SystemExit!\n")
|
||||
if not session.exitstatus and session._testsfailed:
|
||||
session.exitstatus = EXIT_TESTSFAILED
|
||||
if initstate >= 2:
|
||||
config.hook.pytest_sessionfinish(session=session,
|
||||
exitstatus=session.exitstatus)
|
||||
if initstate >= 1:
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
try:
|
||||
config.pluginmanager.do_configure(config)
|
||||
initstate = 1
|
||||
config.hook.pytest_sessionstart(session=session)
|
||||
initstate = 2
|
||||
doit(config, session)
|
||||
except pytest.UsageError:
|
||||
msg = sys.exc_info()[1].args[0]
|
||||
sys.stderr.write("ERROR: %s\n" %(msg,))
|
||||
session.exitstatus = EXIT_USAGEERROR
|
||||
except KeyboardInterrupt:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
|
||||
session.exitstatus = EXIT_INTERRUPTED
|
||||
except:
|
||||
excinfo = py.code.ExceptionInfo()
|
||||
config.pluginmanager.notify_exception(excinfo, config.option)
|
||||
session.exitstatus = EXIT_INTERNALERROR
|
||||
if excinfo.errisinstance(SystemExit):
|
||||
sys.stderr.write("mainloop: caught Spurious SystemExit!\n")
|
||||
finally:
|
||||
if initstate >= 2:
|
||||
config.hook.pytest_sessionfinish(session=session,
|
||||
exitstatus=session.exitstatus or (session._testsfailed and 1))
|
||||
if not session.exitstatus and session._testsfailed:
|
||||
session.exitstatus = EXIT_TESTSFAILED
|
||||
if initstate >= 1:
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
return session.exitstatus
|
||||
|
||||
def pytest_cmdline_main(config):
|
||||
@@ -101,8 +118,19 @@ def pytest_collection(session):
|
||||
def pytest_runtestloop(session):
|
||||
if session.config.option.collectonly:
|
||||
return True
|
||||
for item in session.session.items:
|
||||
item.config.hook.pytest_runtest_protocol(item=item)
|
||||
|
||||
def getnextitem(i):
|
||||
# this is a function to avoid python2
|
||||
# keeping sys.exc_info set when calling into a test
|
||||
# python2 keeps sys.exc_info till the frame is left
|
||||
try:
|
||||
return session.items[i+1]
|
||||
except IndexError:
|
||||
return None
|
||||
|
||||
for i, item in enumerate(session.items):
|
||||
nextitem = getnextitem(i)
|
||||
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
|
||||
if session.shouldstop:
|
||||
raise session.Interrupted(session.shouldstop)
|
||||
return True
|
||||
@@ -120,8 +148,10 @@ class HookProxy:
|
||||
def __init__(self, fspath, config):
|
||||
self.fspath = fspath
|
||||
self.config = config
|
||||
|
||||
def __getattr__(self, name):
|
||||
hookmethod = getattr(self.config.hook, name)
|
||||
|
||||
def call_matching_hooks(**kwargs):
|
||||
plugins = self.config._getmatchingplugins(self.fspath)
|
||||
return hookmethod.pcall(plugins, **kwargs)
|
||||
@@ -129,31 +159,71 @@ class HookProxy:
|
||||
|
||||
def compatproperty(name):
|
||||
def fget(self):
|
||||
# deprecated - use pytest.name
|
||||
return getattr(pytest, name)
|
||||
return property(fget, None, None,
|
||||
"deprecated attribute %r, use pytest.%s" % (name,name))
|
||||
|
||||
|
||||
return property(fget)
|
||||
|
||||
class NodeKeywords(MappingMixin):
|
||||
def __init__(self, node):
|
||||
parent = node.parent
|
||||
bases = parent and (parent.keywords._markers,) or ()
|
||||
self._markers = type("dynmarker", bases, {node.name: True})
|
||||
|
||||
def __getitem__(self, key):
|
||||
try:
|
||||
return getattr(self._markers, key)
|
||||
except AttributeError:
|
||||
raise KeyError(key)
|
||||
|
||||
def __setitem__(self, key, value):
|
||||
setattr(self._markers, key, value)
|
||||
|
||||
def __delitem__(self, key):
|
||||
delattr(self._markers, key)
|
||||
|
||||
def __iter__(self):
|
||||
return iter(self.keys())
|
||||
|
||||
def __len__(self):
|
||||
return len(self.keys())
|
||||
|
||||
def keys(self):
|
||||
return dir(self._markers)
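The dynamic-class trick means a node's keywords contain its own name plus everything defined on its parents; a tiny standalone illustration with a stand-in node class::

    class FakeNode:
        def __init__(self, name, parent=None):
            self.name = name
            self.parent = parent
            self.keywords = NodeKeywords(self)

    session = FakeNode("session")
    item = FakeNode("test_ok", parent=session)
    item.keywords["slow"] = True
    assert item.keywords["test_ok"] is True    # own name
    assert item.keywords["session"] is True    # inherited from the parent
    assert "slow" in item.keywords             # set dynamically above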
|
||||
|
||||
class Node(object):
|
||||
""" base class for all Nodes in the collection tree.
|
||||
""" base class for Collector and Item the test collection tree.
|
||||
Collector subclasses have children, Items are terminal nodes."""
|
||||
|
||||
def __init__(self, name, parent=None, config=None, session=None):
|
||||
#: a unique name with the scope of the parent
|
||||
#: a unique name within the scope of the parent node
|
||||
self.name = name
|
||||
|
||||
#: the parent collector node.
|
||||
self.parent = parent
|
||||
|
||||
#: the test config object
|
||||
|
||||
#: the pytest config object
|
||||
self.config = config or parent.config
|
||||
|
||||
#: the collection this node is part of
|
||||
#: the session this node is part of
|
||||
self.session = session or parent.session
|
||||
|
||||
#: filesystem path where this node was collected from
|
||||
|
||||
#: filesystem path where this node was collected from (can be None)
|
||||
self.fspath = getattr(parent, 'fspath', None)
|
||||
self.ihook = self.session.gethookproxy(self.fspath)
|
||||
self.keywords = {self.name: True}
|
||||
|
||||
#: keywords/markers collected from all scopes
|
||||
self.keywords = NodeKeywords(self)
|
||||
|
||||
#self.extrainit()
|
||||
|
||||
@property
|
||||
def ihook(self):
|
||||
""" fspath sensitive hook proxy used to call pytest hooks"""
|
||||
return self.session.gethookproxy(self.fspath)
|
||||
|
||||
#def extrainit(self):
|
||||
# """"extra initialization after Node is initialized. Implemented
|
||||
# by some subclasses. """
|
||||
|
||||
Module = compatproperty("Module")
|
||||
Class = compatproperty("Class")
|
||||
@@ -171,25 +241,28 @@ class Node(object):
|
||||
return cls
|
||||
|
||||
def __repr__(self):
|
||||
return "<%s %r>" %(self.__class__.__name__, getattr(self, 'name', None))
|
||||
return "<%s %r>" %(self.__class__.__name__,
|
||||
getattr(self, 'name', None))
|
||||
|
||||
# methods for ordering nodes
|
||||
@property
|
||||
def nodeid(self):
|
||||
""" a ::-separated string denoting its collection tree address. """
|
||||
try:
|
||||
return self._nodeid
|
||||
except AttributeError:
|
||||
self._nodeid = x = self._makeid()
|
||||
return x
|
||||
|
||||
|
||||
def _makeid(self):
|
||||
return self.parent.nodeid + "::" + self.name
|
||||
|
||||
def __eq__(self, other):
|
||||
if not isinstance(other, Node):
|
||||
return False
|
||||
return self.__class__ == other.__class__ and \
|
||||
self.name == other.name and self.parent == other.parent
|
||||
return (self.__class__ == other.__class__ and
|
||||
self.name == other.name and self.parent == other.parent)
|
||||
|
||||
def __ne__(self, other):
|
||||
return not self == other
|
||||
@@ -224,13 +297,13 @@ class Node(object):
|
||||
def listchain(self):
|
||||
""" return list of all parent collectors up to self,
|
||||
starting from root of collection tree. """
|
||||
l = [self]
|
||||
while 1:
|
||||
x = l[0]
|
||||
if x.parent is not None: # and x.parent.parent is not None:
|
||||
l.insert(0, x.parent)
|
||||
else:
|
||||
return l
|
||||
chain = []
|
||||
item = self
|
||||
while item is not None:
|
||||
chain.append(item)
|
||||
item = item.parent
|
||||
chain.reverse()
|
||||
return chain
|
||||
|
||||
def listnames(self):
|
||||
return [x.name for x in self.listchain()]
|
||||
@@ -248,6 +321,9 @@ class Node(object):
|
||||
pass
|
||||
|
||||
def _repr_failure_py(self, excinfo, style=None):
|
||||
fm = self.session._fixturemanager
|
||||
if excinfo.errisinstance(fm.FixtureLookupError):
|
||||
return excinfo.value.formatrepr()
|
||||
if self.config.option.fulltrace:
|
||||
style="long"
|
||||
else:
|
||||
@@ -325,6 +401,8 @@ class Item(Node):
|
||||
""" a basic test invocation item. Note that for a single function
|
||||
there might be multiple test invocation items.
|
||||
"""
|
||||
nextitem = None
|
||||
|
||||
def reportinfo(self):
|
||||
return self.fspath, None, ""
|
||||
|
||||
@@ -354,9 +432,9 @@ class Session(FSCollector):
|
||||
__module__ = 'builtins' # for py3
|
||||
|
||||
def __init__(self, config):
|
||||
super(Session, self).__init__(py.path.local(), parent=None,
|
||||
config=config, session=self)
|
||||
assert self.config.pluginmanager.register(self, name="session", prepend=True)
|
||||
FSCollector.__init__(self, py.path.local(), parent=None,
|
||||
config=config, session=self)
|
||||
self.config.pluginmanager.register(self, name="session", prepend=True)
|
||||
self._testsfailed = 0
|
||||
self.shouldstop = False
|
||||
self.trace = config.trace.root.get("collection")
|
||||
@@ -367,7 +445,7 @@ class Session(FSCollector):
|
||||
raise self.Interrupted(self.shouldstop)
|
||||
|
||||
def pytest_runtest_logreport(self, report):
|
||||
if report.failed and 'xfail' not in getattr(report, 'keywords', []):
|
||||
if report.failed and not hasattr(report, 'wasxfail'):
|
||||
self._testsfailed += 1
|
||||
maxfail = self.config.getvalue("maxfail")
|
||||
if maxfail and self._testsfailed >= maxfail:
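The hunk above switches failure counting from the 'xfail' keyword to the new report.wasxfail attribute. A hedged sketch of how a conftest.py hook could mirror the same check (the printing is purely illustrative):

def pytest_runtest_logreport(report):
    # count only genuine failures, not expected (xfail) ones
    if report.failed and not hasattr(report, "wasxfail"):
        print("real failure: %s" % report.nodeid)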
|
||||
@@ -399,6 +477,7 @@ class Session(FSCollector):
|
||||
self._notfound = []
|
||||
self._initialpaths = set()
|
||||
self._initialparts = []
|
||||
self.items = items = []
|
||||
for arg in args:
|
||||
parts = self._parsearg(arg)
|
||||
self._initialparts.append(parts)
|
||||
@@ -414,7 +493,6 @@ class Session(FSCollector):
|
||||
if not genitems:
|
||||
return rep.result
|
||||
else:
|
||||
self.items = items = []
|
||||
if rep.passed:
|
||||
for node in rep.result:
|
||||
self.items.extend(self.genitems(node))
|
||||
@@ -442,7 +520,7 @@ class Session(FSCollector):
|
||||
if path.check(dir=1):
|
||||
assert not names, "invalid arg %r" %(arg,)
|
||||
for path in path.visit(fil=lambda x: x.check(file=1),
|
||||
rec=self._recurse, bf=True, sort=True):
|
||||
rec=self._recurse, bf=True, sort=True):
|
||||
for x in self._collectfile(path):
|
||||
yield x
|
||||
else:
|
||||
@@ -454,13 +532,13 @@ class Session(FSCollector):
|
||||
ihook = self.gethookproxy(path)
|
||||
if not self.isinitpath(path):
|
||||
if ihook.pytest_ignore_collect(path=path, config=self.config):
|
||||
return ()
|
||||
return ()
|
||||
return ihook.pytest_collect_file(path=path, parent=self)
|
||||
|
||||
def _recurse(self, path):
|
||||
ihook = self.gethookproxy(path.dirpath())
|
||||
if ihook.pytest_ignore_collect(path=path, config=self.config):
|
||||
return
|
||||
return
|
||||
for pat in self._norecursepatterns:
|
||||
if path.check(fnmatch=pat):
|
||||
return False
|
||||
@@ -469,16 +547,29 @@ class Session(FSCollector):
|
||||
return True
|
||||
|
||||
def _tryconvertpyarg(self, x):
|
||||
try:
|
||||
mod = __import__(x, None, None, ['__doc__'])
|
||||
except (ValueError, ImportError):
|
||||
return x
|
||||
p = py.path.local(mod.__file__)
|
||||
if p.purebasename == "__init__":
|
||||
p = p.dirpath()
|
||||
else:
|
||||
p = p.new(basename=p.purebasename+".py")
|
||||
return str(p)
|
||||
mod = None
|
||||
path = [os.path.abspath('.')] + sys.path
|
||||
for name in x.split('.'):
|
||||
# ignore anything that's not a proper name here
|
||||
# else something like --pyargs will mess up '.'
|
||||
# since imp.find_module will actually sometimes work for it
|
||||
# but it's supposed to be considered a filesystem path
|
||||
# not a package
|
||||
if name_re.match(name) is None:
|
||||
return x
|
||||
try:
|
||||
fd, mod, type_ = imp.find_module(name, path)
|
||||
except ImportError:
|
||||
return x
|
||||
else:
|
||||
if fd is not None:
|
||||
fd.close()
|
||||
|
||||
if type_[2] != imp.PKG_DIRECTORY:
|
||||
path = [os.path.dirname(mod)]
|
||||
else:
|
||||
path = [mod]
|
||||
return mod
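A standalone sketch of the dotted-name resolution the new _tryconvertpyarg performs above, walking the name with imp.find_module; the function name and fallback behaviour here are illustrative assumptions, not the exact pytest API:

import imp
import os
import sys

def resolve_pyarg(dotted):
    """Walk 'pkg.sub.mod' with imp.find_module, as the hunk above does."""
    path = [os.path.abspath('.')] + sys.path
    location = dotted
    for name in dotted.split('.'):
        try:
            fd, location, desc = imp.find_module(name, path)
        except ImportError:
            return dotted              # not importable: treat as a filesystem path
        if fd is not None:
            fd.close()
        if desc[2] == imp.PKG_DIRECTORY:
            path = [location]          # descend into the package directory
        else:
            path = [os.path.dirname(location)]
    return location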
|
||||
|
||||
def _parsearg(self, arg):
|
||||
""" return (fspath, names) tuple after checking the file exists. """
|
||||
@@ -496,7 +587,7 @@ class Session(FSCollector):
|
||||
raise pytest.UsageError(msg + arg)
|
||||
parts[0] = path
|
||||
return parts
|
||||
|
||||
|
||||
def matchnodes(self, matching, names):
|
||||
self.trace("matchnodes", matching, names)
|
||||
self.trace.root.indent += 1
|
||||
@@ -550,3 +641,12 @@ class Session(FSCollector):
|
||||
for x in self.genitems(subnode):
|
||||
yield x
|
||||
node.ihook.pytest_collectreport(report=rep)
|
||||
|
||||
def getfslineno(obj):
|
||||
# xxx let decorators etc specify a sane ordering
|
||||
if hasattr(obj, 'place_as'):
|
||||
obj = obj.place_as
|
||||
fslineno = py.code.getfslineno(obj)
|
||||
assert isinstance(fslineno[1], int), obj
|
||||
return fslineno
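getfslineno honours an optional place_as attribute when deciding which file and line to report for an object. A small sketch of a decorator taking advantage of that (the decorator name is made up for illustration):

import functools

def fancy_decorator(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    # report failures at the wrapped function's location, not the wrapper's
    wrapper.place_as = func
    return wrapper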
|
||||
|
||||
|
||||
112
_pytest/mark.py
@@ -14,12 +14,37 @@ def pytest_addoption(parser):
|
||||
"Terminate expression with ':' to make the first match match "
|
||||
"all subsequent tests (usually file-order). ")
|
||||
|
||||
group._addoption("-m",
|
||||
action="store", dest="markexpr", default="", metavar="MARKEXPR",
|
||||
help="only run tests matching given mark expression. "
|
||||
"example: -m 'mark1 and not mark2'."
|
||||
)
|
||||
|
||||
group.addoption("--markers", action="store_true", help=
|
||||
"show markers (builtin, plugin and per-project ones).")
|
||||
|
||||
parser.addini("markers", "markers for test functions", 'linelist')
|
||||
|
||||
def pytest_cmdline_main(config):
|
||||
if config.option.markers:
|
||||
config.pluginmanager.do_configure(config)
|
||||
tw = py.io.TerminalWriter()
|
||||
for line in config.getini("markers"):
|
||||
name, rest = line.split(":", 1)
|
||||
tw.write("@pytest.mark.%s:" % name, bold=True)
|
||||
tw.line(rest)
|
||||
tw.line()
|
||||
config.pluginmanager.do_unconfigure(config)
|
||||
return 0
|
||||
pytest_cmdline_main.tryfirst = True
|
||||
|
||||
def pytest_collection_modifyitems(items, config):
|
||||
keywordexpr = config.option.keyword
|
||||
if not keywordexpr:
|
||||
matchexpr = config.option.markexpr
|
||||
if not keywordexpr and not matchexpr:
|
||||
return
|
||||
selectuntil = False
|
||||
if keywordexpr[-1] == ":":
|
||||
if keywordexpr[-1:] == ":":
|
||||
selectuntil = True
|
||||
keywordexpr = keywordexpr[:-1]
|
||||
|
||||
@@ -29,22 +54,39 @@ def pytest_collection_modifyitems(items, config):
|
||||
if keywordexpr and skipbykeyword(colitem, keywordexpr):
|
||||
deselected.append(colitem)
|
||||
else:
|
||||
remaining.append(colitem)
|
||||
if selectuntil:
|
||||
keywordexpr = None
|
||||
if matchexpr:
|
||||
if not matchmark(colitem, matchexpr):
|
||||
deselected.append(colitem)
|
||||
continue
|
||||
remaining.append(colitem)
|
||||
|
||||
if deselected:
|
||||
config.hook.pytest_deselected(items=deselected)
|
||||
items[:] = remaining
|
||||
|
||||
class BoolDict:
|
||||
def __init__(self, mydict):
|
||||
self._mydict = mydict
|
||||
def __getitem__(self, name):
|
||||
return name in self._mydict
|
||||
|
||||
def matchmark(colitem, matchexpr):
|
||||
return eval(matchexpr, {}, BoolDict(colitem.obj.__dict__))
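matchmark above evaluates the -m expression with a dict-like object that answers membership questions about the test function's attributes. A minimal sketch of the same evaluation outside pytest (the attribute names are illustrative):

class BoolDict:
    def __init__(self, mydict):
        self._mydict = mydict
    def __getitem__(self, name):
        return name in self._mydict

names = {"slowtest": True, "webtest": True}   # markers set on a test function
assert eval("slowtest and not serial", {}, BoolDict(names))
assert not eval("serial", {}, BoolDict(names))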
|
||||
|
||||
def pytest_configure(config):
|
||||
if config.option.strict:
|
||||
pytest.mark._config = config
|
||||
|
||||
def skipbykeyword(colitem, keywordexpr):
|
||||
""" return True if they given keyword expression means to
|
||||
skip this collector/item.
|
||||
"""
|
||||
if not keywordexpr:
|
||||
return
|
||||
|
||||
itemkeywords = getkeywords(colitem)
|
||||
|
||||
itemkeywords = colitem.keywords
|
||||
for key in filter(None, keywordexpr.split()):
|
||||
eor = key[:1] == '-'
|
||||
if eor:
|
||||
@@ -52,14 +94,6 @@ def skipbykeyword(colitem, keywordexpr):
|
||||
if not (eor ^ matchonekeyword(key, itemkeywords)):
|
||||
return True
|
||||
|
||||
def getkeywords(node):
|
||||
keywords = {}
|
||||
while node is not None:
|
||||
keywords.update(node.keywords)
|
||||
node = node.parent
|
||||
return keywords
|
||||
|
||||
|
||||
def matchonekeyword(key, itemkeywords):
|
||||
for elem in key.split("."):
|
||||
for kw in itemkeywords:
|
||||
@@ -77,15 +111,31 @@ class MarkGenerator:
|
||||
@py.test.mark.slowtest
|
||||
def test_function():
|
||||
pass
|
||||
|
||||
|
||||
will set a 'slowtest' :class:`MarkInfo` object
|
||||
on the ``test_function`` object. """
|
||||
|
||||
def __getattr__(self, name):
|
||||
if name[0] == "_":
|
||||
raise AttributeError(name)
|
||||
if hasattr(self, '_config'):
|
||||
self._check(name)
|
||||
return MarkDecorator(name)
|
||||
|
||||
def _check(self, name):
|
||||
try:
|
||||
if name in self._markers:
|
||||
return
|
||||
except AttributeError:
|
||||
pass
|
||||
self._markers = l = set()
|
||||
for line in self._config.getini("markers"):
|
||||
beginning = line.split(":", 1)
|
||||
x = beginning[0].split("(", 1)[0]
|
||||
l.add(x)
|
||||
if name not in self._markers:
|
||||
raise AttributeError("%r not a registered marker" % (name,))
|
||||
|
||||
class MarkDecorator:
|
||||
""" A decorator for test functions and test classes. When applied
|
||||
it will create :class:`MarkInfo` objects which may be
|
||||
@@ -133,8 +183,7 @@ class MarkDecorator:
|
||||
holder = MarkInfo(self.markname, self.args, self.kwargs)
|
||||
setattr(func, self.markname, holder)
|
||||
else:
|
||||
holder.kwargs.update(self.kwargs)
|
||||
holder.args += self.args
|
||||
holder.add(self.args, self.kwargs)
|
||||
return func
|
||||
kw = self.kwargs.copy()
|
||||
kw.update(kwargs)
|
||||
@@ -150,27 +199,20 @@ class MarkInfo:
|
||||
self.args = args
|
||||
#: keyword argument dictionary, empty if nothing specified
|
||||
self.kwargs = kwargs
|
||||
self._arglist = [(args, kwargs.copy())]
|
||||
|
||||
def __repr__(self):
|
||||
return "<MarkInfo %r args=%r kwargs=%r>" % (
|
||||
self.name, self.args, self.kwargs)
|
||||
|
||||
def pytest_itemcollected(item):
|
||||
if not isinstance(item, pytest.Function):
|
||||
return
|
||||
try:
|
||||
func = item.obj.__func__
|
||||
except AttributeError:
|
||||
func = getattr(item.obj, 'im_func', item.obj)
|
||||
pyclasses = (pytest.Class, pytest.Module)
|
||||
for node in item.listchain():
|
||||
if isinstance(node, pyclasses):
|
||||
marker = getattr(node.obj, 'pytestmark', None)
|
||||
if marker is not None:
|
||||
if isinstance(marker, list):
|
||||
for mark in marker:
|
||||
mark(func)
|
||||
else:
|
||||
marker(func)
|
||||
node = node.parent
|
||||
item.keywords.update(py.builtin._getfuncdict(func))
|
||||
def add(self, args, kwargs):
|
||||
""" add a MarkInfo with the given args and kwargs. """
|
||||
self._arglist.append((args, kwargs))
|
||||
self.args += args
|
||||
self.kwargs.update(kwargs)
|
||||
|
||||
def __iter__(self):
|
||||
""" yield MarkInfo objects each relating to a marking-call. """
|
||||
for args, kwargs in self._arglist:
|
||||
yield MarkInfo(self.name, args, kwargs)
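With the add()/__iter__ additions above, applying the same marker more than once keeps both a merged args/kwargs view and a per-call view. A hedged usage sketch (the marker name is made up; expected output shown in comments assumes the accumulate semantics of the hunk):

import pytest

def test_icecream():
    pass

pytest.mark.flavor("vanilla")(test_icecream)
pytest.mark.flavor("chocolate")(test_icecream)

info = test_icecream.flavor
print(info.args)                  # merged view, expected: ('vanilla', 'chocolate')
print([m.args for m in info])     # per-call view, expected: [('vanilla',), ('chocolate',)]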
|
||||
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
""" monkeypatching and mocking functionality. """
|
||||
|
||||
import os, sys
|
||||
import os, sys, inspect
|
||||
|
||||
def pytest_funcarg__monkeypatch(request):
|
||||
"""The returned ``monkeypatch`` funcarg provides these
|
||||
@@ -13,6 +13,7 @@ def pytest_funcarg__monkeypatch(request):
|
||||
monkeypatch.setenv(name, value, prepend=False)
|
||||
monkeypatch.delenv(name, value, raising=True)
|
||||
monkeypatch.syspath_prepend(path)
|
||||
monkeypatch.chdir(path)
|
||||
|
||||
All modifications will be undone after the requesting
|
||||
test function has finished. The ``raising``
|
||||
@@ -30,6 +31,7 @@ class monkeypatch:
|
||||
def __init__(self):
|
||||
self._setattr = []
|
||||
self._setitem = []
|
||||
self._cwd = None
|
||||
|
||||
def setattr(self, obj, name, value, raising=True):
|
||||
""" set attribute ``name`` on ``obj`` to ``value``, by default
|
||||
@@ -37,6 +39,10 @@ class monkeypatch:
|
||||
oldval = getattr(obj, name, notset)
|
||||
if raising and oldval is notset:
|
||||
raise AttributeError("%r has no attribute %r" %(obj, name))
|
||||
|
||||
# avoid class descriptors like staticmethod/classmethod
|
||||
if inspect.isclass(obj):
|
||||
oldval = obj.__dict__.get(name, notset)
|
||||
self._setattr.insert(0, (obj, name, oldval))
|
||||
setattr(obj, name, value)
|
||||
|
||||
@@ -83,6 +89,17 @@ class monkeypatch:
|
||||
self._savesyspath = sys.path[:]
|
||||
sys.path.insert(0, str(path))
|
||||
|
||||
def chdir(self, path):
|
||||
""" change the current working directory to the specified path
|
||||
path can be a string or a py.path.local object
|
||||
"""
|
||||
if self._cwd is None:
|
||||
self._cwd = os.getcwd()
|
||||
if hasattr(path, "chdir"):
|
||||
path.chdir()
|
||||
else:
|
||||
os.chdir(path)
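A short usage sketch for the new chdir() helper above as it might appear in a test (the file name is only an example); undo() restores the original working directory after the test finishes:

import os

def test_reads_local_config(monkeypatch, tmpdir):
    tmpdir.join("example.cfg").write("[section]\n")
    monkeypatch.chdir(tmpdir)          # accepts a py.path.local or a plain string
    assert os.path.exists("example.cfg")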
|
||||
|
||||
def undo(self):
|
||||
""" undo previous changes. This call consumes the
|
||||
undo stack. Calling it a second time has no effect unless
|
||||
@@ -95,9 +112,17 @@ class monkeypatch:
|
||||
self._setattr[:] = []
|
||||
for dictionary, name, value in self._setitem:
|
||||
if value is notset:
|
||||
del dictionary[name]
|
||||
try:
|
||||
del dictionary[name]
|
||||
except KeyError:
|
||||
pass # was already deleted, so we have the desired state
|
||||
else:
|
||||
dictionary[name] = value
|
||||
self._setitem[:] = []
|
||||
if hasattr(self, '_savesyspath'):
|
||||
sys.path[:] = self._savesyspath
|
||||
del self._savesyspath
|
||||
|
||||
if self._cwd is not None:
|
||||
os.chdir(self._cwd)
|
||||
self._cwd = None
|
||||
|
||||
@@ -41,7 +41,7 @@ def pytest_make_collect_report(collector):
|
||||
|
||||
def call_optional(obj, name):
|
||||
method = getattr(obj, name, None)
|
||||
if method:
|
||||
if method is not None and not hasattr(method, "_pytestfixturefunction"):
|
||||
# If there's any problems allow the exception to raise rather than
|
||||
# silently ignoring them
|
||||
method()
|
||||
|
||||
@@ -2,7 +2,7 @@
|
||||
import py, sys
|
||||
|
||||
class url:
|
||||
base = "http://paste.pocoo.org"
|
||||
base = "http://bpaste.net"
|
||||
xmlrpc = base + "/xmlrpc/"
|
||||
show = base + "/show/"
|
||||
|
||||
@@ -11,7 +11,7 @@ def pytest_addoption(parser):
|
||||
group._addoption('--pastebin', metavar="mode",
|
||||
action='store', dest="pastebin", default=None,
|
||||
type="choice", choices=['failed', 'all'],
|
||||
help="send failed|all info to Pocoo pastebin service.")
|
||||
help="send failed|all info to bpaste.net pastebin service.")
|
||||
|
||||
def pytest_configure(__multicall__, config):
|
||||
import tempfile
|
||||
@@ -38,7 +38,11 @@ def pytest_unconfigure(config):
|
||||
del tr._tw.__dict__['write']
|
||||
|
||||
def getproxy():
|
||||
return py.std.xmlrpclib.ServerProxy(url.xmlrpc).pastes
|
||||
if sys.version_info < (3, 0):
|
||||
from xmlrpclib import ServerProxy
|
||||
else:
|
||||
from xmlrpc.client import ServerProxy
|
||||
return ServerProxy(url.xmlrpc).pastes
|
||||
|
||||
def pytest_terminal_summary(terminalreporter):
|
||||
if terminalreporter.config.option.pastebin != "failed":
|
||||
|
||||
@@ -19,11 +19,13 @@ def pytest_configure(config):
|
||||
class pytestPDB:
|
||||
""" Pseudo PDB that defers to the real pdb. """
|
||||
item = None
|
||||
collector = None
|
||||
|
||||
def set_trace(self):
|
||||
""" invoke PDB set_trace debugging, dropping any IO capturing. """
|
||||
frame = sys._getframe().f_back
|
||||
item = getattr(self, 'item', None)
|
||||
item = self.item or self.collector
|
||||
|
||||
if item is not None:
|
||||
capman = item.config.pluginmanager.getplugin("capturemanager")
|
||||
out, err = capman.suspendcapture()
|
||||
@@ -38,9 +40,17 @@ def pdbitem(item):
|
||||
pytestPDB.item = item
|
||||
pytest_runtest_setup = pytest_runtest_call = pytest_runtest_teardown = pdbitem
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_make_collect_report(__multicall__, collector):
|
||||
try:
|
||||
pytestPDB.collector = collector
|
||||
return __multicall__.execute()
|
||||
finally:
|
||||
pytestPDB.collector = None
|
||||
|
||||
def pytest_runtest_makereport():
|
||||
pytestPDB.item = None
|
||||
|
||||
|
||||
class PdbInvoke:
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_makereport(self, item, call, __multicall__):
|
||||
@@ -49,7 +59,7 @@ class PdbInvoke:
|
||||
call.excinfo.errisinstance(pytest.skip.Exception) or \
|
||||
call.excinfo.errisinstance(py.std.bdb.BdbQuit):
|
||||
return rep
|
||||
if "xfail" in rep.keywords:
|
||||
if hasattr(rep, "wasxfail"):
|
||||
return rep
|
||||
# we assume that the above execute() suspended capturing
|
||||
# XXX we re-use the TerminalReporter's terminalwriter
|
||||
@@ -60,7 +70,13 @@ class PdbInvoke:
|
||||
tw.sep(">", "traceback")
|
||||
rep.toterminal(tw)
|
||||
tw.sep(">", "entering PDB")
|
||||
post_mortem(call.excinfo._excinfo[2])
|
||||
# A doctest.UnexpectedException is not useful for post_mortem.
|
||||
# Use the underlying exception instead:
|
||||
if isinstance(call.excinfo.value, py.std.doctest.UnexpectedException):
|
||||
tb = call.excinfo.value.exc_info[2]
|
||||
else:
|
||||
tb = call.excinfo._excinfo[2]
|
||||
post_mortem(tb)
|
||||
rep._pdbshown = True
|
||||
return rep
|
||||
|
||||
|
||||
@@ -2,6 +2,7 @@
|
||||
|
||||
import py, pytest
|
||||
import sys, os
|
||||
import codecs
|
||||
import re
|
||||
import inspect
|
||||
import time
|
||||
@@ -25,6 +26,7 @@ def pytest_configure(config):
|
||||
_pytest_fullpath
|
||||
except NameError:
|
||||
_pytest_fullpath = os.path.abspath(pytest.__file__.rstrip("oc"))
|
||||
_pytest_fullpath = _pytest_fullpath.replace("$py.class", ".py")
|
||||
|
||||
def pytest_funcarg___pytest(request):
|
||||
return PytestArg(request)
|
||||
@@ -243,8 +245,10 @@ class TmpTestdir:
|
||||
ret = None
|
||||
for name, value in items:
|
||||
p = self.tmpdir.join(name).new(ext=ext)
|
||||
source = py.builtin._totext(py.code.Source(value)).lstrip()
|
||||
p.write(source.encode("utf-8"), "wb")
|
||||
source = py.builtin._totext(py.code.Source(value)).strip()
|
||||
content = source.encode("utf-8") # + "\n"
|
||||
#content = content.rstrip() + "\n"
|
||||
p.write(content, "wb")
|
||||
if ret is None:
|
||||
ret = p
|
||||
return ret
|
||||
@@ -313,21 +317,11 @@ class TmpTestdir:
|
||||
result.extend(session.genitems(colitem))
|
||||
return result
|
||||
|
||||
def inline_genitems(self, *args):
|
||||
#config = self.parseconfig(*args)
|
||||
config = self.parseconfigure(*args)
|
||||
rec = self.getreportrecorder(config)
|
||||
session = Session(config)
|
||||
config.hook.pytest_sessionstart(session=session)
|
||||
session.perform_collect()
|
||||
config.hook.pytest_sessionfinish(session=session, exitstatus=EXIT_OK)
|
||||
return session.items, rec
|
||||
|
||||
def runitem(self, source):
|
||||
# used from runner functional tests
|
||||
item = self.getitem(source)
|
||||
# the test class where we are called from wants to provide the runner
|
||||
testclassinstance = py.builtin._getimself(self.request.function)
|
||||
testclassinstance = self.request.instance
|
||||
runner = testclassinstance.getrunner()
|
||||
return runner(item)
|
||||
|
||||
@@ -343,71 +337,66 @@ class TmpTestdir:
|
||||
l = list(args) + [p]
|
||||
reprec = self.inline_run(*l)
|
||||
reports = reprec.getreports("pytest_runtest_logreport")
|
||||
assert len(reports) == 1, reports
|
||||
return reports[0]
|
||||
assert len(reports) == 3, reports # setup/call/teardown
|
||||
return reports[1]
|
||||
|
||||
def inline_genitems(self, *args):
|
||||
return self.inprocess_run(list(args) + ['--collectonly'])
|
||||
|
||||
def inline_run(self, *args):
|
||||
args = ("-s", ) + args # otherwise FD leakage
|
||||
config = self.parseconfig(*args)
|
||||
reprec = self.getreportrecorder(config)
|
||||
#config.pluginmanager.do_configure(config)
|
||||
config.hook.pytest_cmdline_main(config=config)
|
||||
#config.pluginmanager.do_unconfigure(config)
|
||||
return reprec
|
||||
items, rec = self.inprocess_run(args)
|
||||
return rec
|
||||
|
||||
def config_preparse(self):
|
||||
config = self.Config()
|
||||
for plugin in self.plugins:
|
||||
if isinstance(plugin, str):
|
||||
config.pluginmanager.import_plugin(plugin)
|
||||
else:
|
||||
if isinstance(plugin, dict):
|
||||
plugin = PseudoPlugin(plugin)
|
||||
if not config.pluginmanager.isregistered(plugin):
|
||||
config.pluginmanager.register(plugin)
|
||||
return config
|
||||
def inprocess_run(self, args, plugins=None):
|
||||
rec = []
|
||||
items = []
|
||||
class Collect:
|
||||
def pytest_configure(x, config):
|
||||
rec.append(self.getreportrecorder(config))
|
||||
def pytest_itemcollected(self, item):
|
||||
items.append(item)
|
||||
if not plugins:
|
||||
plugins = []
|
||||
plugins.append(Collect())
|
||||
ret = self.pytestmain(list(args), plugins=plugins)
|
||||
reprec = rec[0]
|
||||
reprec.ret = ret
|
||||
assert len(rec) == 1
|
||||
return items, reprec
|
||||
|
||||
def parseconfig(self, *args):
|
||||
if not args:
|
||||
args = (self.tmpdir,)
|
||||
config = self.config_preparse()
|
||||
args = list(args)
|
||||
args = [str(x) for x in args]
|
||||
for x in args:
|
||||
if str(x).startswith('--basetemp'):
|
||||
break
|
||||
else:
|
||||
args.append("--basetemp=%s" % self.tmpdir.dirpath('basetemp'))
|
||||
config.parse(args)
|
||||
import _pytest.core
|
||||
config = _pytest.core._prepareconfig(args, self.plugins)
|
||||
# the in-process pytest invocation needs to avoid leaking FDs
|
||||
# so we register a "reset_capturings" callmon the capturing manager
|
||||
# and make sure it gets called
|
||||
config._cleanup.append(
|
||||
config.pluginmanager.getplugin("capturemanager").reset_capturings)
|
||||
import _pytest.config
|
||||
self.request.addfinalizer(
|
||||
lambda: _pytest.config.pytest_unconfigure(config))
|
||||
return config
|
||||
|
||||
def reparseconfig(self, args=None):
|
||||
""" this is used from tests that want to re-invoke parse(). """
|
||||
if not args:
|
||||
args = [self.tmpdir]
|
||||
oldconfig = getattr(py.test, 'config', None)
|
||||
try:
|
||||
c = py.test.config = self.Config()
|
||||
c.basetemp = py.path.local.make_numbered_dir(prefix="reparse",
|
||||
keep=0, rootdir=self.tmpdir, lock_timeout=None)
|
||||
c.parse(args)
|
||||
c.pluginmanager.do_configure(c)
|
||||
self.request.addfinalizer(lambda: c.pluginmanager.do_unconfigure(c))
|
||||
return c
|
||||
finally:
|
||||
py.test.config = oldconfig
|
||||
|
||||
def parseconfigure(self, *args):
|
||||
config = self.parseconfig(*args)
|
||||
config.pluginmanager.do_configure(config)
|
||||
self.request.addfinalizer(lambda:
|
||||
config.pluginmanager.do_unconfigure(config))
|
||||
config.pluginmanager.do_unconfigure(config))
|
||||
return config
|
||||
|
||||
def getitem(self, source, funcname="test_func"):
|
||||
for item in self.getitems(source):
|
||||
items = self.getitems(source)
|
||||
for item in items:
|
||||
if item.name == funcname:
|
||||
return item
|
||||
assert 0, "%r item not found in module:\n%s" %(funcname, source)
|
||||
assert 0, "%r item not found in module:\n%s\nitems: %s" %(
|
||||
funcname, source, items)
|
||||
|
||||
def getitems(self, source):
|
||||
modcol = self.getmodulecol(source)
|
||||
@@ -420,7 +409,6 @@ class TmpTestdir:
|
||||
self.makepyfile(__init__ = "#")
|
||||
self.config = config = self.parseconfigure(path, *configargs)
|
||||
node = self.getnode(config, path)
|
||||
#config.pluginmanager.do_unconfigure(config)
|
||||
return node
|
||||
|
||||
def collect_by_name(self, modcol, name):
|
||||
@@ -437,9 +425,16 @@ class TmpTestdir:
|
||||
return py.std.subprocess.Popen(cmdargs, stdout=stdout, stderr=stderr, **kw)
|
||||
|
||||
def pytestmain(self, *args, **kwargs):
|
||||
ret = pytest.main(*args, **kwargs)
|
||||
if ret == 2:
|
||||
raise KeyboardInterrupt()
|
||||
class ResetCapturing:
|
||||
@pytest.mark.trylast
|
||||
def pytest_unconfigure(self, config):
|
||||
capman = config.pluginmanager.getplugin("capturemanager")
|
||||
capman.reset_capturings()
|
||||
plugins = kwargs.setdefault("plugins", [])
|
||||
rc = ResetCapturing()
|
||||
plugins.append(rc)
|
||||
return pytest.main(*args, **kwargs)
|
||||
|
||||
def run(self, *cmdargs):
|
||||
return self._run(*cmdargs)
|
||||
|
||||
@@ -448,28 +443,35 @@ class TmpTestdir:
|
||||
p1 = self.tmpdir.join("stdout")
|
||||
p2 = self.tmpdir.join("stderr")
|
||||
print_("running", cmdargs, "curdir=", py.path.local())
|
||||
f1 = p1.open("wb")
|
||||
f2 = p2.open("wb")
|
||||
now = time.time()
|
||||
popen = self.popen(cmdargs, stdout=f1, stderr=f2,
|
||||
close_fds=(sys.platform != "win32"))
|
||||
ret = popen.wait()
|
||||
f1.close()
|
||||
f2.close()
|
||||
out = p1.read("rb")
|
||||
out = getdecoded(out).splitlines()
|
||||
err = p2.read("rb")
|
||||
err = getdecoded(err).splitlines()
|
||||
def dump_lines(lines, fp):
|
||||
try:
|
||||
for line in lines:
|
||||
py.builtin.print_(line, file=fp)
|
||||
except UnicodeEncodeError:
|
||||
print("couldn't print to %s because of encoding" % (fp,))
|
||||
dump_lines(out, sys.stdout)
|
||||
dump_lines(err, sys.stderr)
|
||||
f1 = codecs.open(str(p1), "w", encoding="utf8")
|
||||
f2 = codecs.open(str(p2), "w", encoding="utf8")
|
||||
try:
|
||||
now = time.time()
|
||||
popen = self.popen(cmdargs, stdout=f1, stderr=f2,
|
||||
close_fds=(sys.platform != "win32"))
|
||||
ret = popen.wait()
|
||||
finally:
|
||||
f1.close()
|
||||
f2.close()
|
||||
f1 = codecs.open(str(p1), "r", encoding="utf8")
|
||||
f2 = codecs.open(str(p2), "r", encoding="utf8")
|
||||
try:
|
||||
out = f1.read().splitlines()
|
||||
err = f2.read().splitlines()
|
||||
finally:
|
||||
f1.close()
|
||||
f2.close()
|
||||
self._dump_lines(out, sys.stdout)
|
||||
self._dump_lines(err, sys.stderr)
|
||||
return RunResult(ret, out, err, time.time()-now)
|
||||
|
||||
def _dump_lines(self, lines, fp):
|
||||
try:
|
||||
for line in lines:
|
||||
py.builtin.print_(line, file=fp)
|
||||
except UnicodeEncodeError:
|
||||
print("couldn't print to %s because of encoding" % (fp,))
|
||||
|
||||
def runpybin(self, scriptname, *args):
|
||||
fullargs = self._getpybinargs(scriptname) + args
|
||||
return self.run(*fullargs)
|
||||
@@ -528,6 +530,10 @@ class TmpTestdir:
|
||||
pexpect = py.test.importorskip("pexpect", "2.4")
|
||||
if hasattr(sys, 'pypy_version_info') and '64' in py.std.platform.machine():
|
||||
pytest.skip("pypy-64 bit not supported")
|
||||
if sys.platform == "darwin":
|
||||
pytest.xfail("pexpect does not work reliably on darwin?!")
|
||||
if sys.platform.startswith("freebsd"):
|
||||
pytest.xfail("pexpect does not work reliably on freebsd")
|
||||
logfile = self.tmpdir.join("spawn.out")
|
||||
child = pexpect.spawn(cmd, logfile=logfile.open("w"))
|
||||
child.timeout = expect_timeout
|
||||
@@ -540,10 +546,6 @@ def getdecoded(out):
|
||||
return "INTERNAL not-utf8-decodeable, truncated string:\n%s" % (
|
||||
py.io.saferepr(out),)
|
||||
|
||||
class PseudoPlugin:
|
||||
def __init__(self, vars):
|
||||
self.__dict__.update(vars)
|
||||
|
||||
class ReportRecorder(object):
|
||||
def __init__(self, hook):
|
||||
self.hook = hook
|
||||
@@ -565,10 +567,17 @@ class ReportRecorder(object):
|
||||
def getreports(self, names="pytest_runtest_logreport pytest_collectreport"):
|
||||
return [x.report for x in self.getcalls(names)]
|
||||
|
||||
def matchreport(self, inamepart="", names="pytest_runtest_logreport pytest_collectreport", when=None):
|
||||
def matchreport(self, inamepart="",
|
||||
names="pytest_runtest_logreport pytest_collectreport", when=None):
|
||||
""" return a testreport whose dotted import path matches """
|
||||
l = []
|
||||
for rep in self.getreports(names=names):
|
||||
try:
|
||||
if not when and rep.when != "call" and rep.passed:
|
||||
# setup/teardown passing reports - let's ignore those
|
||||
continue
|
||||
except AttributeError:
|
||||
pass
|
||||
if when and getattr(rep, 'when', None) != when:
|
||||
continue
|
||||
if not inamepart or inamepart in rep.nodeid.split("::"):
|
||||
|
||||
1688
_pytest/python.py
File diff suppressed because it is too large
@@ -1,4 +1,6 @@
|
||||
""" (disabled by default) create result information in a plain text file. """
|
||||
""" log machine-parseable test session result information in a plain
|
||||
text file.
|
||||
"""
|
||||
|
||||
import py
|
||||
|
||||
@@ -63,6 +65,8 @@ class ResultLog(object):
|
||||
self.write_log_entry(testpath, lettercode, longrepr)
|
||||
|
||||
def pytest_runtest_logreport(self, report):
|
||||
if report.when != "call" and report.passed:
|
||||
return
|
||||
res = self.config.hook.pytest_report_teststatus(report=report)
|
||||
code = res[1]
|
||||
if code == 'x':
|
||||
@@ -89,5 +93,8 @@ class ResultLog(object):
|
||||
self.log_outcome(report, code, longrepr)
|
||||
|
||||
def pytest_internalerror(self, excrepr):
|
||||
path = excrepr.reprcrash.path
|
||||
reprcrash = getattr(excrepr, 'reprcrash', None)
|
||||
path = getattr(reprcrash, "path", None)
|
||||
if path is None:
|
||||
path = "cwd:%s" % py.path.local()
|
||||
self.write_log_entry(path, '!', str(excrepr))
|
||||
|
||||
@@ -1,6 +1,7 @@
|
||||
""" basic collect and runtest protocol implementations """
|
||||
|
||||
import py, sys
|
||||
from time import time
|
||||
from py._code.code import TerminalRepr
|
||||
|
||||
def pytest_namespace():
|
||||
@@ -14,33 +15,60 @@ def pytest_namespace():
|
||||
#
|
||||
# pytest plugin hooks
|
||||
|
||||
def pytest_addoption(parser):
|
||||
group = parser.getgroup("terminal reporting", "reporting", after="general")
|
||||
group.addoption('--durations',
|
||||
action="store", type="int", default=None, metavar="N",
|
||||
help="show N slowest setup/test durations (N=0 for all)."),
|
||||
|
||||
def pytest_terminal_summary(terminalreporter):
|
||||
durations = terminalreporter.config.option.durations
|
||||
if durations is None:
|
||||
return
|
||||
tr = terminalreporter
|
||||
dlist = []
|
||||
for replist in tr.stats.values():
|
||||
for rep in replist:
|
||||
if hasattr(rep, 'duration'):
|
||||
dlist.append(rep)
|
||||
if not dlist:
|
||||
return
|
||||
dlist.sort(key=lambda x: x.duration)
|
||||
dlist.reverse()
|
||||
if not durations:
|
||||
tr.write_sep("=", "slowest test durations")
|
||||
else:
|
||||
tr.write_sep("=", "slowest %s test durations" % durations)
|
||||
dlist = dlist[:durations]
|
||||
|
||||
for rep in dlist:
|
||||
nodeid = rep.nodeid.replace("::()::", "::")
|
||||
tr.write_line("%02.2fs %-8s %s" %
|
||||
(rep.duration, rep.when, nodeid))
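The --durations summary above sorts the duration values recorded on each report. A small sketch of the same selection on made-up numbers:

# illustrative data only; real values come from rep.duration
durations = [("test_slow call", 0.21), ("test_fast call", 0.001), ("test_slow setup", 0.002)]
durations.sort(key=lambda x: x[1], reverse=True)
for name, seconds in durations[:2]:        # as with --durations=2
    print("%02.2fs  %s" % (seconds, name))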
|
||||
|
||||
def pytest_sessionstart(session):
|
||||
session._setupstate = SetupState()
|
||||
|
||||
def pytest_sessionfinish(session, exitstatus):
|
||||
hook = session.config.hook
|
||||
rep = hook.pytest__teardown_final(session=session)
|
||||
if rep:
|
||||
hook.pytest__teardown_final_logerror(session=session, report=rep)
|
||||
session.exitstatus = 1
|
||||
def pytest_sessionfinish(session):
|
||||
session._setupstate.teardown_all()
|
||||
|
||||
class NodeInfo:
|
||||
def __init__(self, location):
|
||||
self.location = location
|
||||
|
||||
def pytest_runtest_protocol(item):
|
||||
def pytest_runtest_protocol(item, nextitem):
|
||||
item.ihook.pytest_runtest_logstart(
|
||||
nodeid=item.nodeid, location=item.location,
|
||||
)
|
||||
runtestprotocol(item)
|
||||
runtestprotocol(item, nextitem=nextitem)
|
||||
return True
|
||||
|
||||
def runtestprotocol(item, log=True):
|
||||
def runtestprotocol(item, log=True, nextitem=None):
|
||||
rep = call_and_report(item, "setup", log)
|
||||
reports = [rep]
|
||||
if rep.passed:
|
||||
reports.append(call_and_report(item, "call", log))
|
||||
reports.append(call_and_report(item, "teardown", log))
|
||||
reports.append(call_and_report(item, "teardown", log,
|
||||
nextitem=nextitem))
|
||||
return reports
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
@@ -49,16 +77,8 @@ def pytest_runtest_setup(item):
|
||||
def pytest_runtest_call(item):
|
||||
item.runtest()
|
||||
|
||||
def pytest_runtest_teardown(item):
|
||||
item.session._setupstate.teardown_exact(item)
|
||||
|
||||
def pytest__teardown_final(session):
|
||||
call = CallInfo(session._setupstate.teardown_all, when="teardown")
|
||||
if call.excinfo:
|
||||
ntraceback = call.excinfo.traceback .cut(excludepath=py._pydir)
|
||||
call.excinfo.traceback = ntraceback.filter()
|
||||
longrepr = call.excinfo.getrepr(funcargs=True)
|
||||
return TeardownErrorReport(longrepr)
|
||||
def pytest_runtest_teardown(item, nextitem):
|
||||
item.session._setupstate.teardown_exact(item, nextitem)
|
||||
|
||||
def pytest_report_teststatus(report):
|
||||
if report.when in ("setup", "teardown"):
|
||||
@@ -74,18 +94,18 @@ def pytest_report_teststatus(report):
|
||||
#
|
||||
# Implementation
|
||||
|
||||
def call_and_report(item, when, log=True):
|
||||
call = call_runtest_hook(item, when)
|
||||
def call_and_report(item, when, log=True, **kwds):
|
||||
call = call_runtest_hook(item, when, **kwds)
|
||||
hook = item.ihook
|
||||
report = hook.pytest_runtest_makereport(item=item, call=call)
|
||||
if log and (when == "call" or not report.passed):
|
||||
if log:
|
||||
hook.pytest_runtest_logreport(report=report)
|
||||
return report
|
||||
|
||||
def call_runtest_hook(item, when):
|
||||
def call_runtest_hook(item, when, **kwds):
|
||||
hookname = "pytest_runtest_" + when
|
||||
ihook = getattr(item.ihook, hookname)
|
||||
return CallInfo(lambda: ihook(item=item), when=when)
|
||||
return CallInfo(lambda: ihook(item=item, **kwds), when=when)
|
||||
|
||||
class CallInfo:
|
||||
""" Result/Exception info a function invocation. """
|
||||
@@ -95,12 +115,16 @@ class CallInfo:
|
||||
#: context of invocation: one of "setup", "call",
|
||||
#: "teardown", "memocollect"
|
||||
self.when = when
|
||||
self.start = time()
|
||||
try:
|
||||
self.result = func()
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except:
|
||||
self.excinfo = py.code.ExceptionInfo()
|
||||
try:
|
||||
self.result = func()
|
||||
except KeyboardInterrupt:
|
||||
raise
|
||||
except:
|
||||
self.excinfo = py.code.ExceptionInfo()
|
||||
finally:
|
||||
self.stop = time()
|
||||
|
||||
def __repr__(self):
|
||||
if self.excinfo:
|
||||
@@ -120,6 +144,10 @@ def getslaveinfoline(node):
|
||||
return s
|
||||
|
||||
class BaseReport(object):
|
||||
|
||||
def __init__(self, **kw):
|
||||
self.__dict__.update(kw)
|
||||
|
||||
def toterminal(self, out):
|
||||
longrepr = self.longrepr
|
||||
if hasattr(self, 'node'):
|
||||
@@ -127,7 +155,10 @@ class BaseReport(object):
|
||||
if hasattr(longrepr, 'toterminal'):
|
||||
longrepr.toterminal(out)
|
||||
else:
|
||||
out.line(str(longrepr))
|
||||
try:
|
||||
out.line(longrepr)
|
||||
except UnicodeEncodeError:
|
||||
out.line("<unprintable longrepr>")
|
||||
|
||||
passed = property(lambda x: x.outcome == "passed")
|
||||
failed = property(lambda x: x.outcome == "failed")
|
||||
@@ -139,6 +170,7 @@ class BaseReport(object):
|
||||
|
||||
def pytest_runtest_makereport(item, call):
|
||||
when = call.when
|
||||
duration = call.stop-call.start
|
||||
keywords = dict([(x,1) for x in item.keywords])
|
||||
excinfo = call.excinfo
|
||||
if not call.excinfo:
|
||||
@@ -160,14 +192,15 @@ def pytest_runtest_makereport(item, call):
|
||||
else: # exception in setup or teardown
|
||||
longrepr = item._repr_failure_py(excinfo)
|
||||
return TestReport(item.nodeid, item.location,
|
||||
keywords, outcome, longrepr, when)
|
||||
keywords, outcome, longrepr, when,
|
||||
duration=duration)
|
||||
|
||||
class TestReport(BaseReport):
|
||||
""" Basic test report object (also used for setup and teardown calls if
|
||||
they fail).
|
||||
"""
|
||||
def __init__(self, nodeid, location,
|
||||
keywords, outcome, longrepr, when):
|
||||
keywords, outcome, longrepr, when, sections=(), duration=0, **extra):
|
||||
#: normalized collection node id
|
||||
self.nodeid = nodeid
|
||||
|
||||
@@ -179,16 +212,25 @@ class TestReport(BaseReport):
|
||||
#: a name -> value dictionary containing all keywords and
|
||||
#: markers associated with a test invocation.
|
||||
self.keywords = keywords
|
||||
|
||||
|
||||
#: test outcome, always one of "passed", "failed", "skipped".
|
||||
self.outcome = outcome
|
||||
|
||||
#: None or a failure representation.
|
||||
self.longrepr = longrepr
|
||||
|
||||
|
||||
#: one of 'setup', 'call', 'teardown' to indicate runtest phase.
|
||||
self.when = when
|
||||
|
||||
#: list of (secname, data) extra information which needs to
|
||||
#: marshallable
|
||||
self.sections = list(sections)
|
||||
|
||||
#: time it took to run just the test
|
||||
self.duration = duration
|
||||
|
||||
self.__dict__.update(extra)
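A small sketch of a plugin hook reading the two fields added to TestReport above (duration and sections); the printing is purely illustrative:

def pytest_runtest_logreport(report):
    if report.when == "call":
        print("%s took %.3fs" % (report.nodeid, report.duration))
        for secname, content in report.sections:
            print("  extra section: %s" % secname)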
|
||||
|
||||
def __repr__(self):
|
||||
return "<TestReport %r when=%r outcome=%r>" % (
|
||||
self.nodeid, self.when, self.outcome)
|
||||
@@ -196,8 +238,10 @@ class TestReport(BaseReport):
|
||||
class TeardownErrorReport(BaseReport):
|
||||
outcome = "failed"
|
||||
when = "teardown"
|
||||
def __init__(self, longrepr):
|
||||
def __init__(self, longrepr, **extra):
|
||||
self.longrepr = longrepr
|
||||
self.sections = []
|
||||
self.__dict__.update(extra)
|
||||
|
||||
def pytest_make_collect_report(collector):
|
||||
call = CallInfo(collector._memocollect, "memocollect")
|
||||
@@ -219,11 +263,13 @@ def pytest_make_collect_report(collector):
|
||||
getattr(call, 'result', None))
|
||||
|
||||
class CollectReport(BaseReport):
|
||||
def __init__(self, nodeid, outcome, longrepr, result):
|
||||
def __init__(self, nodeid, outcome, longrepr, result, sections=(), **extra):
|
||||
self.nodeid = nodeid
|
||||
self.outcome = outcome
|
||||
self.longrepr = longrepr
|
||||
self.result = result or []
|
||||
self.sections = list(sections)
|
||||
self.__dict__.update(extra)
|
||||
|
||||
@property
|
||||
def location(self):
|
||||
@@ -237,7 +283,7 @@ class CollectErrorRepr(TerminalRepr):
|
||||
def __init__(self, msg):
|
||||
self.longrepr = msg
|
||||
def toterminal(self, out):
|
||||
out.line(str(self.longrepr), red=True)
|
||||
out.line(self.longrepr, red=True)
|
||||
|
||||
class SetupState(object):
|
||||
""" shared state for setting up/tearing down test items or collectors. """
|
||||
@@ -249,6 +295,8 @@ class SetupState(object):
|
||||
""" attach a finalizer to the given colitem.
|
||||
if colitem is None, this will add a finalizer that
|
||||
is called at the end of teardown_all().
|
||||
if colitem is a tuple, it will be used as a key
|
||||
and needs an explicit call to _callfinalizers(key) later on.
|
||||
"""
|
||||
assert hasattr(finalizer, '__call__')
|
||||
#assert colitem in self.stack
|
||||
@@ -266,31 +314,35 @@ class SetupState(object):
|
||||
|
||||
def _teardown_with_finalization(self, colitem):
|
||||
self._callfinalizers(colitem)
|
||||
if colitem:
|
||||
if hasattr(colitem, "teardown"):
|
||||
colitem.teardown()
|
||||
for colitem in self._finalizers:
|
||||
assert colitem is None or colitem in self.stack
|
||||
assert colitem is None or colitem in self.stack \
|
||||
or isinstance(colitem, tuple)
|
||||
|
||||
def teardown_all(self):
|
||||
while self.stack:
|
||||
self._pop_and_teardown()
|
||||
self._teardown_with_finalization(None)
|
||||
for key in list(self._finalizers):
|
||||
self._teardown_with_finalization(key)
|
||||
assert not self._finalizers
|
||||
|
||||
def teardown_exact(self, item):
|
||||
if self.stack and item == self.stack[-1]:
|
||||
def teardown_exact(self, item, nextitem):
|
||||
needed_collectors = nextitem and nextitem.listchain() or []
|
||||
self._teardown_towards(needed_collectors)
|
||||
|
||||
def _teardown_towards(self, needed_collectors):
|
||||
while self.stack:
|
||||
if self.stack == needed_collectors[:len(self.stack)]:
|
||||
break
|
||||
self._pop_and_teardown()
|
||||
else:
|
||||
self._callfinalizers(item)
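A standalone sketch of the shared-prefix comparison that _teardown_towards above performs against the next item's listchain(); the names and callback are illustrative, not pytest internals:

def teardown_towards(stack, needed_collectors, teardown):
    while stack:
        if stack == needed_collectors[:len(stack)]:
            break                      # the common prefix stays set up
        teardown(stack.pop())

def log_teardown(node):
    print("tearing down %s" % node)

stack = ["session", "test_mod.py", "TestA"]
next_chain = ["session", "test_mod.py", "TestB"]
teardown_towards(stack, next_chain, log_teardown)
# only "TestA" is torn down; the module and session remain prepared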
|
||||
|
||||
def prepare(self, colitem):
|
||||
""" setup objects along the collector chain to the test-method
|
||||
and teardown previously setup objects."""
|
||||
needed_collectors = colitem.listchain()
|
||||
while self.stack:
|
||||
if self.stack == needed_collectors[:len(self.stack)]:
|
||||
break
|
||||
self._pop_and_teardown()
|
||||
self._teardown_towards(needed_collectors)
|
||||
|
||||
# check if the last collection node has raised an error
|
||||
for col in self.stack:
|
||||
if hasattr(col, '_prepare_exc'):
|
||||
@@ -357,7 +409,9 @@ skip.Exception = Skipped
|
||||
|
||||
def fail(msg="", pytrace=True):
|
||||
""" explicitely fail an currently-executing test with the given Message.
|
||||
if @pytrace is not True the msg represents the full failure information.
|
||||
|
||||
:arg pytrace: if false the msg represents the full failure information
|
||||
and no python traceback will be reported.
|
||||
"""
|
||||
__tracebackhide__ = True
|
||||
raise Failed(msg=msg, pytrace=pytrace)
|
||||
@@ -372,9 +426,10 @@ def importorskip(modname, minversion=None):
|
||||
__tracebackhide__ = True
|
||||
compile(modname, '', 'eval') # to catch syntaxerrors
|
||||
try:
|
||||
mod = __import__(modname, None, None, ['__doc__'])
|
||||
__import__(modname)
|
||||
except ImportError:
|
||||
py.test.skip("could not import %r" %(modname,))
|
||||
mod = sys.modules[modname]
|
||||
if minversion is None:
|
||||
return mod
|
||||
verattr = getattr(mod, '__version__', None)
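Usage of importorskip is unchanged by the hunk above (import by name, then fetch the module from sys.modules so dotted names work); a short hedged example with an arbitrary module and version:

import pytest

docutils = pytest.importorskip("docutils", minversion="0.8")

def test_uses_docutils():
    assert docutils.__version__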
|
||||
|
||||
@@ -9,6 +9,22 @@ def pytest_addoption(parser):
|
||||
action="store_true", dest="runxfail", default=False,
|
||||
help="run tests even if they are marked xfail")
|
||||
|
||||
def pytest_configure(config):
|
||||
config.addinivalue_line("markers",
|
||||
"skipif(condition): skip the given test function if eval(condition) "
|
||||
"results in a True value. Evaluation happens within the "
|
||||
"module global context. Example: skipif('sys.platform == \"win32\"') "
|
||||
"skips the test if we are on the win32 platform. see "
|
||||
"http://pytest.org/latest/skipping.html"
|
||||
)
|
||||
config.addinivalue_line("markers",
|
||||
"xfail(condition, reason=None, run=True): mark the the test function "
|
||||
"as an expected failure if eval(condition) has a True value. "
|
||||
"Optionally specify a reason for better reporting and run=False if "
|
||||
"you don't even want to execute the test function. See "
|
||||
"http://pytest.org/latest/skipping.html"
|
||||
)
|
||||
|
||||
def pytest_namespace():
|
||||
return dict(xfail=xfail)
|
||||
|
||||
@@ -95,6 +111,7 @@ class MarkEvaluator:
|
||||
return expl
|
||||
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_setup(item):
|
||||
if not isinstance(item, pytest.Function):
|
||||
return
|
||||
@@ -117,6 +134,14 @@ def check_xfail_no_run(item):
|
||||
def pytest_runtest_makereport(__multicall__, item, call):
|
||||
if not isinstance(item, pytest.Function):
|
||||
return
|
||||
# unitttest special case, see setting of _unexpectedsuccess
|
||||
if hasattr(item, '_unexpectedsuccess'):
|
||||
rep = __multicall__.execute()
|
||||
if rep.when == "call":
|
||||
# we need to translate into how py.test encodes xpass
|
||||
rep.wasxfail = "reason: " + repr(item._unexpectedsuccess)
|
||||
rep.outcome = "failed"
|
||||
return rep
|
||||
if not (call.excinfo and
|
||||
call.excinfo.errisinstance(py.test.xfail.Exception)):
|
||||
evalxfail = getattr(item, '_evalxfail', None)
|
||||
@@ -125,27 +150,27 @@ def pytest_runtest_makereport(__multicall__, item, call):
|
||||
if call.excinfo and call.excinfo.errisinstance(py.test.xfail.Exception):
|
||||
if not item.config.getvalue("runxfail"):
|
||||
rep = __multicall__.execute()
|
||||
rep.keywords['xfail'] = "reason: " + call.excinfo.value.msg
|
||||
rep.wasxfail = "reason: " + call.excinfo.value.msg
|
||||
rep.outcome = "skipped"
|
||||
return rep
|
||||
rep = __multicall__.execute()
|
||||
evalxfail = item._evalxfail
|
||||
if not item.config.option.runxfail:
|
||||
if evalxfail.wasvalid() and evalxfail.istrue():
|
||||
if call.excinfo:
|
||||
rep.outcome = "skipped"
|
||||
rep.keywords['xfail'] = evalxfail.getexplanation()
|
||||
elif call.when == "call":
|
||||
rep.outcome = "failed"
|
||||
rep.keywords['xfail'] = evalxfail.getexplanation()
|
||||
return rep
|
||||
if 'xfail' in rep.keywords:
|
||||
del rep.keywords['xfail']
|
||||
if not rep.skipped:
|
||||
if not item.config.option.runxfail:
|
||||
if evalxfail.wasvalid() and evalxfail.istrue():
|
||||
if call.excinfo:
|
||||
rep.outcome = "skipped"
|
||||
elif call.when == "call":
|
||||
rep.outcome = "failed"
|
||||
else:
|
||||
return rep
|
||||
rep.wasxfail = evalxfail.getexplanation()
|
||||
return rep
|
||||
return rep
|
||||
|
||||
# called by terminalreporter progress reporting
|
||||
def pytest_report_teststatus(report):
|
||||
if 'xfail' in report.keywords:
|
||||
if hasattr(report, "wasxfail"):
|
||||
if report.skipped:
|
||||
return "xfailed", "x", "xfail"
|
||||
elif report.failed:
|
||||
@@ -169,28 +194,30 @@ def pytest_terminal_summary(terminalreporter):
|
||||
elif char == "X":
|
||||
show_xpassed(terminalreporter, lines)
|
||||
elif char in "fF":
|
||||
show_failed(terminalreporter, lines)
|
||||
show_simple(terminalreporter, lines, 'failed', "FAIL %s")
|
||||
elif char in "sS":
|
||||
show_skipped(terminalreporter, lines)
|
||||
elif char == "E":
|
||||
show_simple(terminalreporter, lines, 'error', "ERROR %s")
|
||||
if lines:
|
||||
tr._tw.sep("=", "short test summary info")
|
||||
for line in lines:
|
||||
tr._tw.line(line)
|
||||
|
||||
def show_failed(terminalreporter, lines):
|
||||
def show_simple(terminalreporter, lines, stat, format):
|
||||
tw = terminalreporter._tw
|
||||
failed = terminalreporter.stats.get("failed")
|
||||
failed = terminalreporter.stats.get(stat)
|
||||
if failed:
|
||||
for rep in failed:
|
||||
pos = rep.nodeid
|
||||
lines.append("FAIL %s" %(pos, ))
|
||||
lines.append(format %(pos, ))
|
||||
|
||||
def show_xfailed(terminalreporter, lines):
|
||||
xfailed = terminalreporter.stats.get("xfailed")
|
||||
if xfailed:
|
||||
for rep in xfailed:
|
||||
pos = rep.nodeid
|
||||
reason = rep.keywords['xfail']
|
||||
reason = rep.wasxfail
|
||||
lines.append("XFAIL %s" % (pos,))
|
||||
if reason:
|
||||
lines.append(" " + str(reason))
|
||||
@@ -200,7 +227,7 @@ def show_xpassed(terminalreporter, lines):
|
||||
if xpassed:
|
||||
for rep in xpassed:
|
||||
pos = rep.nodeid
|
||||
reason = rep.keywords['xfail']
|
||||
reason = rep.wasxfail
|
||||
lines.append("XPASS %s %s" %(pos, reason))
|
||||
|
||||
def cached_eval(config, expr, d):
|
||||
|
||||
@@ -2,7 +2,8 @@
|
||||
|
||||
This is a good source for looking at the various reporting hooks.
|
||||
"""
|
||||
import pytest, py
|
||||
import pytest
|
||||
import py
|
||||
import sys
|
||||
import os
|
||||
|
||||
@@ -11,11 +12,11 @@ def pytest_addoption(parser):
|
||||
group._addoption('-v', '--verbose', action="count",
|
||||
dest="verbose", default=0, help="increase verbosity."),
|
||||
group._addoption('-q', '--quiet', action="count",
|
||||
dest="quiet", default=0, help="decreate verbosity."),
|
||||
dest="quiet", default=0, help="decrease verbosity."),
|
||||
group._addoption('-r',
|
||||
action="store", dest="reportchars", default=None, metavar="chars",
|
||||
help="show extra test summary info as specified by chars (f)ailed, "
|
||||
"(s)skipped, (x)failed, (X)passed.")
|
||||
"(E)error, (s)skipped, (x)failed, (X)passed.")
|
||||
group._addoption('-l', '--showlocals',
|
||||
action="store_true", dest="showlocals", default=False,
|
||||
help="show locals in tracebacks (disabled by default).")
|
||||
@@ -37,13 +38,15 @@ def pytest_configure(config):
|
||||
stdout = py.std.sys.stdout
|
||||
if hasattr(os, 'dup') and hasattr(stdout, 'fileno'):
|
||||
try:
|
||||
newfd = os.dup(stdout.fileno())
|
||||
#print "got newfd", newfd
|
||||
newstdout = py.io.dupfile(stdout, buffering=1,
|
||||
encoding=stdout.encoding)
|
||||
except ValueError:
|
||||
pass
|
||||
else:
|
||||
stdout = os.fdopen(newfd, stdout.mode, 1)
|
||||
config._toclose = stdout
|
||||
config._cleanup.append(lambda: newstdout.close())
|
||||
assert stdout.encoding == newstdout.encoding
|
||||
stdout = newstdout
|
||||
|
||||
reporter = TerminalReporter(config, stdout)
|
||||
config.pluginmanager.register(reporter, 'terminalreporter')
|
||||
if config.option.debug or config.option.traceconfig:
|
||||
@@ -52,11 +55,6 @@ def pytest_configure(config):
|
||||
reporter.write_line("[traceconfig] " + msg)
|
||||
config.trace.root.setprocessor("pytest:config", mywriter)
|
||||
|
||||
def pytest_unconfigure(config):
|
||||
if hasattr(config, '_toclose'):
|
||||
#print "closing", config._toclose, config._toclose.fileno()
|
||||
config._toclose.close()
|
||||
|
||||
def getreportopt(config):
|
||||
reportopts = ""
|
||||
optvalue = config.option.report
|
||||
@@ -98,7 +96,7 @@ class TerminalReporter:
|
||||
self._numcollected = 0
|
||||
|
||||
self.stats = {}
|
||||
self.curdir = py.path.local()
|
||||
self.startdir = self.curdir = py.path.local()
|
||||
if file is None:
|
||||
file = py.std.sys.stdout
|
||||
self._tw = py.io.TerminalWriter(file)
|
||||
@@ -113,9 +111,9 @@ class TerminalReporter:
|
||||
def write_fspath_result(self, fspath, res):
|
||||
if fspath != self.currentfspath:
|
||||
self.currentfspath = fspath
|
||||
#fspath = self.curdir.bestrelpath(fspath)
|
||||
#fspath = self.startdir.bestrelpath(fspath)
|
||||
self._tw.line()
|
||||
#relpath = self.curdir.bestrelpath(fspath)
|
||||
#relpath = self.startdir.bestrelpath(fspath)
|
||||
self._tw.write(fspath + " ")
|
||||
self._tw.write(res)
|
||||
|
||||
@@ -165,9 +163,6 @@ class TerminalReporter:
|
||||
def pytest_deselected(self, items):
|
||||
self.stats.setdefault('deselected', []).extend(items)
|
||||
|
||||
def pytest__teardown_final_logerror(self, report):
|
||||
self.stats.setdefault("error", []).append(report)
|
||||
|
||||
def pytest_runtest_logstart(self, nodeid, location):
|
||||
# ensure that the path is printed before the
|
||||
# 1st test of a module starts running
|
||||
@@ -214,7 +209,7 @@ class TerminalReporter:
|
||||
self.currentfspath = -2
|
||||
|
||||
def pytest_collection(self):
|
||||
if not self.hasmarkup:
|
||||
if not self.hasmarkup and self.config.option.verbose >=1:
|
||||
self.write("collecting ... ", bold=True)
|
||||
|
||||
def pytest_collectreport(self, report):
|
||||
@@ -229,6 +224,9 @@ class TerminalReporter:
|
||||
self.report_collect()
|
||||
|
||||
def report_collect(self, final=False):
|
||||
if self.config.option.verbose < 0:
|
||||
return
|
||||
|
||||
errors = len(self.stats.get('error', []))
|
||||
skipped = len(self.stats.get('skipped', []))
|
||||
if final:
|
||||
@@ -250,6 +248,7 @@ class TerminalReporter:
|
||||
def pytest_collection_modifyitems(self):
|
||||
self.report_collect(True)
|
||||
|
||||
@pytest.mark.trylast
|
||||
def pytest_sessionstart(self, session):
|
||||
self._sessionstarttime = py.std.time.time()
|
||||
if not self.showheader:
|
||||
@@ -265,11 +264,23 @@ class TerminalReporter:
|
||||
getattr(self.config.option, 'pastebin', None):
|
||||
msg += " -- " + str(sys.executable)
|
||||
self.write_line(msg)
|
||||
lines = self.config.hook.pytest_report_header(config=self.config)
|
||||
lines = self.config.hook.pytest_report_header(
|
||||
config=self.config, startdir=self.startdir)
|
||||
lines.reverse()
|
||||
for line in flatten(lines):
|
||||
self.write_line(line)
|
||||
|
||||
def pytest_report_header(self, config):
|
||||
plugininfo = config.pluginmanager._plugin_distinfo
|
||||
if plugininfo:
|
||||
l = []
|
||||
for dist, plugin in plugininfo:
|
||||
name = dist.project_name
|
||||
if name.startswith("pytest-"):
|
||||
name = name[7:]
|
||||
l.append(name)
|
||||
return "plugins: %s" % ", ".join(l)
|
||||
|
||||
def pytest_collection_finish(self, session):
|
||||
if self.config.option.collectonly:
|
||||
self._printcollecteditems(session.items)
|
||||
@@ -289,10 +300,18 @@ class TerminalReporter:
|
||||
# we take care to leave out Instances aka ()
|
||||
# because later versions are going to get rid of them anyway
|
||||
if self.config.option.verbose < 0:
|
||||
for item in items:
|
||||
nodeid = item.nodeid
|
||||
nodeid = nodeid.replace("::()::", "::")
|
||||
self._tw.line(nodeid)
|
||||
if self.config.option.verbose < -1:
|
||||
counts = {}
|
||||
for item in items:
|
||||
name = item.nodeid.split('::', 1)[0]
|
||||
counts[name] = counts.get(name, 0) + 1
|
||||
for name, count in sorted(counts.items()):
|
||||
self._tw.line("%s: %d" % (name, count))
|
||||
else:
|
||||
for item in items:
|
||||
nodeid = item.nodeid
|
||||
nodeid = nodeid.replace("::()::", "::")
|
||||
self._tw.line(nodeid)
|
||||
return
|
||||
stack = []
|
||||
indent = ""
|
||||
@@ -393,7 +412,7 @@ class TerminalReporter:
|
||||
else:
|
||||
msg = self._getfailureheadline(rep)
|
||||
self.write_sep("_", msg)
|
||||
rep.toterminal(self._tw)
|
||||
self._outrep_summary(rep)
|
||||
|
||||
def summary_errors(self):
|
||||
if self.config.option.tbstyle != "no":
|
||||
@@ -411,32 +430,49 @@ class TerminalReporter:
|
||||
elif rep.when == "teardown":
|
||||
msg = "ERROR at teardown of " + msg
|
||||
self.write_sep("_", msg)
|
||||
rep.toterminal(self._tw)
|
||||
self._outrep_summary(rep)
|
||||
|
||||
def _outrep_summary(self, rep):
|
||||
rep.toterminal(self._tw)
|
||||
for secname, content in rep.sections:
|
||||
self._tw.sep("-", secname)
|
||||
if content[-1:] == "\n":
|
||||
content = content[:-1]
|
||||
self._tw.line(content)
|
||||
|
||||
def summary_stats(self):
|
||||
session_duration = py.std.time.time() - self._sessionstarttime
|
||||
|
||||
keys = "failed passed skipped deselected".split()
|
||||
keys = "failed passed skipped deselected xfailed xpassed".split()
|
||||
for key in self.stats.keys():
|
||||
if key not in keys:
|
||||
keys.append(key)
|
||||
parts = []
|
||||
for key in keys:
|
||||
val = self.stats.get(key, None)
|
||||
if val:
|
||||
parts.append("%d %s" %(len(val), key))
|
||||
if key: # setup/teardown reports have an empty key, ignore them
|
||||
val = self.stats.get(key, None)
|
||||
if val:
|
||||
parts.append("%d %s" %(len(val), key))
|
||||
line = ", ".join(parts)
|
||||
# XXX coloring
|
||||
msg = "%s in %.2f seconds" %(line, session_duration)
|
||||
if self.verbosity >= 0:
|
||||
self.write_sep("=", msg, bold=True)
|
||||
else:
|
||||
self.write_line(msg, bold=True)
|
||||
#else:
|
||||
# self.write_line(msg, bold=True)
|
||||
|
||||
def summary_deselected(self):
|
||||
if 'deselected' in self.stats:
|
||||
self.write_sep("=", "%d tests deselected by %r" %(
|
||||
len(self.stats['deselected']), self.config.option.keyword), bold=True)
|
||||
l = []
|
||||
k = self.config.option.keyword
|
||||
if k:
|
||||
l.append("-k%s" % k)
|
||||
m = self.config.option.markexpr
|
||||
if m:
|
||||
l.append("-m %r" % m)
|
||||
if l:
|
||||
self.write_sep("=", "%d tests deselected by %r" %(
|
||||
len(self.stats['deselected']), " ".join(l)), bold=True)
|
||||
|
||||
def repr_pythonversion(v=None):
|
||||
if v is None:
|
||||
|
||||
@@ -40,13 +40,13 @@ class TempdirHandler:
|
||||
basetemp.mkdir()
|
||||
else:
|
||||
basetemp = py.path.local.make_numbered_dir(prefix='pytest-')
|
||||
self._basetemp = t = basetemp
|
||||
self._basetemp = t = basetemp.realpath()
|
||||
self.trace("new basetemp", t)
|
||||
return t
|
||||
|
||||
def finish(self):
|
||||
self.trace("finish")
|
||||
|
||||
|
||||
def pytest_configure(config):
|
||||
mp = monkeypatch()
|
||||
t = TempdirHandler(config)
|
||||
@@ -64,5 +64,5 @@ def pytest_funcarg__tmpdir(request):
|
||||
name = request._pyfuncitem.name
|
||||
name = py.std.re.sub("[\W]", "_", name)
|
||||
x = request.config._tmpdirhandler.mktemp(name, numbered=True)
|
||||
return x.realpath()
|
||||
return x
|
||||
|
||||
|
||||
@@ -2,6 +2,9 @@
|
||||
import pytest, py
|
||||
import sys, pdb
|
||||
|
||||
# for transfering markers
|
||||
from _pytest.python import transfer_markers
|
||||
|
||||
def pytest_pycollect_makeitem(collector, name, obj):
|
||||
unittest = sys.modules.get('unittest')
|
||||
if unittest is None:
|
||||
@@ -17,10 +20,30 @@ def pytest_pycollect_makeitem(collector, name, obj):
|
||||
return UnitTestCase(name, parent=collector)
|
||||
|
||||
class UnitTestCase(pytest.Class):
|
||||
nofuncargs = True # marker for fixturemanger.getfixtureinfo()
|
||||
# to declare that our children do not support funcargs
|
||||
|
||||
def collect(self):
|
||||
self.session._fixturemanager.parsefactories(self, unittest=True)
|
||||
loader = py.std.unittest.TestLoader()
|
||||
module = self.getparent(pytest.Module).obj
|
||||
cls = self.obj
|
||||
foundsomething = False
|
||||
for name in loader.getTestCaseNames(self.obj):
|
||||
x = getattr(self.obj, name)
|
||||
funcobj = getattr(x, 'im_func', x)
|
||||
transfer_markers(funcobj, cls, module)
|
||||
if hasattr(funcobj, 'todo'):
|
||||
pytest.mark.xfail(reason=str(funcobj.todo))(funcobj)
|
||||
yield TestCaseFunction(name, parent=self)
|
||||
foundsomething = True
|
||||
|
||||
if not foundsomething:
|
||||
runtest = getattr(self.obj, 'runTest', None)
|
||||
if runtest is not None:
|
||||
ut = sys.modules.get("twisted.trial.unittest", None)
|
||||
if ut is None or runtest != ut.TestCase.runTest:
|
||||
yield TestCaseFunction('runTest', parent=self)
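A sketch of the kind of classic TestCase that collect() above handles; a todo attribute on a test method (a trial-style convention) is translated into an xfail marker, and a class with only runTest still yields a single item:

import unittest

class TestLegacy(unittest.TestCase):
    def test_not_done_yet(self):
        raise NotImplementedError()
    test_not_done_yet.todo = "feature still missing"   # collected as xfail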
|
||||
|
||||
def setup(self):
|
||||
meth = getattr(self.obj, 'setUpClass', None)
|
||||
@@ -37,17 +60,17 @@ class UnitTestCase(pytest.Class):
|
||||
class TestCaseFunction(pytest.Function):
|
||||
_excinfo = None
|
||||
|
||||
def __init__(self, name, parent):
|
||||
super(TestCaseFunction, self).__init__(name, parent)
|
||||
if hasattr(self._obj, 'todo'):
|
||||
getattr(self._obj, 'im_func', self._obj).xfail = \
|
||||
pytest.mark.xfail(reason=str(self._obj.todo))
|
||||
|
||||
def setup(self):
|
||||
self._testcase = self.parent.obj(self.name)
|
||||
self._obj = getattr(self._testcase, self.name)
|
||||
if hasattr(self._testcase, 'skip'):
|
||||
pytest.skip(self._testcase.skip)
|
||||
if hasattr(self._obj, 'skip'):
|
||||
pytest.skip(self._obj.skip)
|
||||
if hasattr(self._testcase, 'setup_method'):
|
||||
self._testcase.setup_method(self._obj)
|
||||
if hasattr(self, "_request"):
|
||||
self._request._fillfixtures()
|
||||
|
||||
def teardown(self):
|
||||
if hasattr(self._testcase, 'teardown_method'):
|
||||
@@ -83,28 +106,37 @@ class TestCaseFunction(pytest.Function):
|
||||
self._addexcinfo(rawexcinfo)
|
||||
def addFailure(self, testcase, rawexcinfo):
|
||||
self._addexcinfo(rawexcinfo)
|
||||
|
||||
def addSkip(self, testcase, reason):
|
||||
try:
|
||||
pytest.skip(reason)
|
||||
except pytest.skip.Exception:
|
||||
self._addexcinfo(sys.exc_info())
|
||||
def addExpectedFailure(self, testcase, rawexcinfo, reason):
|
||||
|
||||
def addExpectedFailure(self, testcase, rawexcinfo, reason=""):
|
||||
try:
|
||||
pytest.xfail(str(reason))
|
||||
except pytest.xfail.Exception:
|
||||
self._addexcinfo(sys.exc_info())
|
||||
def addUnexpectedSuccess(self, testcase, reason):
|
||||
pass
|
||||
|
||||
def addUnexpectedSuccess(self, testcase, reason=""):
|
||||
self._unexpectedsuccess = reason
|
||||
|
||||
def addSuccess(self, testcase):
|
||||
pass
|
||||
|
||||
def stopTest(self, testcase):
|
||||
pass
|
||||
|
||||
def runtest(self):
|
||||
self._testcase(result=self)
|
||||
|
||||
def _prunetraceback(self, excinfo):
|
||||
pytest.Function._prunetraceback(self, excinfo)
|
||||
excinfo.traceback = excinfo.traceback.filter(lambda x:not x.frame.f_globals.get('__unittest'))
|
||||
traceback = excinfo.traceback.filter(
|
||||
lambda x:not x.frame.f_globals.get('__unittest'))
|
||||
if traceback:
|
||||
excinfo.traceback = traceback
|
||||
|
||||
@pytest.mark.tryfirst
|
||||
def pytest_runtest_makereport(item, call):
|
||||
@@ -120,14 +152,19 @@ def pytest_runtest_protocol(item, __multicall__):
|
||||
ut = sys.modules['twisted.python.failure']
|
||||
Failure__init__ = ut.Failure.__init__.im_func
|
||||
check_testcase_implements_trial_reporter()
|
||||
def excstore(self, exc_value=None, exc_type=None, exc_tb=None):
|
||||
def excstore(self, exc_value=None, exc_type=None, exc_tb=None,
|
||||
captureVars=None):
|
||||
if exc_value is None:
|
||||
self._rawexcinfo = sys.exc_info()
|
||||
else:
|
||||
if exc_type is None:
|
||||
exc_type = type(exc_value)
|
||||
self._rawexcinfo = (exc_type, exc_value, exc_tb)
|
||||
Failure__init__(self, exc_value, exc_type, exc_tb)
|
||||
try:
|
||||
Failure__init__(self, exc_value, exc_type, exc_tb,
|
||||
captureVars=captureVars)
|
||||
except TypeError:
|
||||
Failure__init__(self, exc_value, exc_type, exc_tb)
|
||||
ut.Failure.__init__ = excstore
|
||||
try:
|
||||
return __multicall__.execute()
|
||||
|
||||
@@ -46,7 +46,7 @@ except ImportError:
|
||||
args = [quote(arg) for arg in args]
|
||||
return os.spawnl(os.P_WAIT, sys.executable, *args) == 0
|
||||
|
||||
DEFAULT_VERSION = "0.6.19"
|
||||
DEFAULT_VERSION = "0.6.27"
|
||||
DEFAULT_URL = "http://pypi.python.org/packages/source/d/distribute/"
|
||||
SETUPTOOLS_FAKED_VERSION = "0.6c11"
|
||||
|
||||
@@ -63,7 +63,7 @@ Description: xxx
|
||||
""" % SETUPTOOLS_FAKED_VERSION
|
||||
|
||||
|
||||
def _install(tarball):
|
||||
def _install(tarball, install_args=()):
|
||||
# extracting the tarball
|
||||
tmpdir = tempfile.mkdtemp()
|
||||
log.warn('Extracting in %s', tmpdir)
|
||||
@@ -81,7 +81,7 @@ def _install(tarball):
|
||||
|
||||
# installing
|
||||
log.warn('Installing Distribute')
|
||||
if not _python_cmd('setup.py', 'install'):
|
||||
if not _python_cmd('setup.py', 'install', *install_args):
|
||||
log.warn('Something went wrong during the installation.')
|
||||
log.warn('See the error message above.')
|
||||
finally:
|
||||
@@ -306,6 +306,9 @@ def _create_fake_setuptools_pkg_info(placeholder):
|
||||
log.warn('%s already exists', pkg_info)
|
||||
return
|
||||
|
||||
if not os.access(pkg_info, os.W_OK):
|
||||
log.warn("Don't have permissions to write %s, skipping", pkg_info)
|
||||
|
||||
log.warn('Creating %s', pkg_info)
|
||||
f = open(pkg_info, 'w')
|
||||
try:
|
||||
@@ -474,11 +477,20 @@ def _extractall(self, path=".", members=None):
|
||||
else:
|
||||
self._dbg(1, "tarfile: %s" % e)
|
||||
|
||||
def _build_install_args(argv):
|
||||
install_args = []
|
||||
user_install = '--user' in argv
|
||||
if user_install and sys.version_info < (2,6):
|
||||
log.warn("--user requires Python 2.6 or later")
|
||||
raise SystemExit(1)
|
||||
if user_install:
|
||||
install_args.append('--user')
|
||||
return install_args
|
||||
|
||||
def main(argv, version=DEFAULT_VERSION):
|
||||
"""Install or upgrade setuptools and EasyInstall"""
|
||||
tarball = download_setuptools()
|
||||
_install(tarball)
|
||||
_install(tarball, _build_install_args(argv))
|
||||
|
||||
|
||||
if __name__ == '__main__':
|
||||
|
||||
doc/en/Makefile (new file, 144 lines)
@@ -0,0 +1,144 @@
|
||||
# Makefile for Sphinx documentation
|
||||
#
|
||||
|
||||
# You can set these variables from the command line.
|
||||
SPHINXOPTS =
|
||||
SPHINXBUILD = sphinx-build
|
||||
PAPER =
|
||||
BUILDDIR = _build
|
||||
|
||||
# Internal variables.
|
||||
PAPEROPT_a4 = -D latex_paper_size=a4
|
||||
PAPEROPT_letter = -D latex_paper_size=letter
|
||||
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
|
||||
|
||||
SITETARGET=latest
|
||||
|
||||
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
|
||||
|
||||
regen:
|
||||
PYTHONDONTWRITEBYTECODE=1 COLUMNS=76 regendoc --update *.txt */*.txt
|
||||
|
||||
help:
|
||||
@echo "Please use \`make <target>' where <target> is one of"
|
||||
@echo " html to make standalone HTML files"
|
||||
@echo " dirhtml to make HTML files named index.html in directories"
|
||||
@echo " singlehtml to make a single large HTML file"
|
||||
@echo " pickle to make pickle files"
|
||||
@echo " json to make JSON files"
|
||||
@echo " htmlhelp to make HTML files and a HTML help project"
|
||||
@echo " qthelp to make HTML files and a qthelp project"
|
||||
@echo " devhelp to make HTML files and a Devhelp project"
|
||||
@echo " epub to make an epub"
|
||||
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
|
||||
@echo " latexpdf to make LaTeX files and run them through pdflatex"
|
||||
@echo " text to make text files"
|
||||
@echo " man to make manual pages"
|
||||
@echo " changes to make an overview of all changed/added/deprecated items"
|
||||
@echo " linkcheck to check all external links for integrity"
|
||||
@echo " doctest to run all doctests embedded in the documentation (if enabled)"
|
||||
|
||||
clean:
|
||||
-rm -rf $(BUILDDIR)/*
|
||||
|
||||
install: html
|
||||
rsync -avz _build/html/ pytest.org:/www/pytest.org/$(SITETARGET)
|
||||
|
||||
installpdf: latexpdf
|
||||
@scp $(BUILDDIR)/latex/pytest.pdf pytest.org:/www/pytest.org/$(SITETARGET)
|
||||
|
||||
installall: clean install installpdf
|
||||
@echo "done"
|
||||
|
||||
html:
|
||||
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/html."
|
||||
|
||||
dirhtml:
|
||||
$(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml
|
||||
@echo
|
||||
@echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml."
|
||||
|
||||
singlehtml:
|
||||
$(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml
|
||||
@echo
|
||||
@echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml."
|
||||
|
||||
pickle:
|
||||
$(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle
|
||||
@echo
|
||||
@echo "Build finished; now you can process the pickle files."
|
||||
|
||||
json:
|
||||
$(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json
|
||||
@echo
|
||||
@echo "Build finished; now you can process the JSON files."
|
||||
|
||||
htmlhelp:
|
||||
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run HTML Help Workshop with the" \
|
||||
".hhp project file in $(BUILDDIR)/htmlhelp."
|
||||
|
||||
qthelp:
|
||||
$(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp
|
||||
@echo
|
||||
@echo "Build finished; now you can run "qcollectiongenerator" with the" \
|
||||
".qhcp project file in $(BUILDDIR)/qthelp, like this:"
|
||||
@echo "# qcollectiongenerator $(BUILDDIR)/qthelp/pytest.qhcp"
|
||||
@echo "To view the help file:"
|
||||
@echo "# assistant -collectionFile $(BUILDDIR)/qthelp/pytest.qhc"
|
||||
|
||||
devhelp:
|
||||
$(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp
|
||||
@echo
|
||||
@echo "Build finished."
|
||||
@echo "To view the help file:"
|
||||
@echo "# mkdir -p $$HOME/.local/share/devhelp/pytest"
|
||||
@echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/pytest"
|
||||
@echo "# devhelp"
|
||||
|
||||
epub:
|
||||
$(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub
|
||||
@echo
|
||||
@echo "Build finished. The epub file is in $(BUILDDIR)/epub."
|
||||
|
||||
latex:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo
|
||||
@echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex."
|
||||
@echo "Run \`make' in that directory to run these through (pdf)latex" \
|
||||
"(use \`make latexpdf' here to do that automatically)."
|
||||
|
||||
latexpdf:
|
||||
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex
|
||||
@echo "Running LaTeX files through pdflatex..."
|
||||
make -C $(BUILDDIR)/latex all-pdf
|
||||
@echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex."
|
||||
|
||||
text:
|
||||
$(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text
|
||||
@echo
|
||||
@echo "Build finished. The text files are in $(BUILDDIR)/text."
|
||||
|
||||
man:
|
||||
$(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man
|
||||
@echo
|
||||
@echo "Build finished. The manual pages are in $(BUILDDIR)/man."
|
||||
|
||||
changes:
|
||||
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes
|
||||
@echo
|
||||
@echo "The overview file is in $(BUILDDIR)/changes."
|
||||
|
||||
linkcheck:
|
||||
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck
|
||||
@echo
|
||||
@echo "Link check complete; look for any errors in the above output " \
|
||||
"or in $(BUILDDIR)/linkcheck/output.txt."
|
||||
|
||||
doctest:
|
||||
$(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest
|
||||
@echo "Testing of doctests in the sources finished, look at the " \
|
||||
"results in $(BUILDDIR)/doctest/output.txt."
|
||||
@@ -1,5 +1,9 @@
|
||||
{% extends "!layout.html" %}
|
||||
|
||||
{% block relbaritems %}
|
||||
{{ super() }}
|
||||
<g:plusone></g:plusone>
|
||||
{% endblock %}
|
||||
|
||||
{% block footer %}
|
||||
{{ super() }}
|
||||
@@ -16,4 +20,5 @@
|
||||
})();
|
||||
|
||||
</script>
|
||||
<script type="text/javascript" src="https://apis.google.com/js/plusone.js"></script>
|
||||
{% endblock %}
|
||||
@@ -18,7 +18,7 @@
|
||||
<td>
|
||||
<a href="{{ pathto('index') }}">home</a>
|
||||
</td><td>
|
||||
<a href="{{ pathto('contents') }}">contents</a>
|
||||
<a href="{{ pathto('contents') }}">TOC/contents</a>
|
||||
</td></tr><tr><td>
|
||||
<a href="{{ pathto('getting-started') }}">install</a>
|
||||
</td><td>
|
||||
@@ -5,6 +5,16 @@ Release announcements
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
release-2.3.2
|
||||
release-2.3.1
|
||||
release-2.3.0
|
||||
release-2.2.4
|
||||
release-2.2.2
|
||||
release-2.2.1
|
||||
release-2.2.0
|
||||
release-2.1.3
|
||||
release-2.1.2
|
||||
release-2.1.1
|
||||
release-2.1.0
|
||||
release-2.0.3
|
||||
release-2.0.2
|
||||
doc/en/announce/release-2.0.1.txt (new file, 67 lines)
@@ -0,0 +1,67 @@
|
||||
py.test 2.0.1: bug fixes
|
||||
===========================================================================
|
||||
|
||||
Welcome to pytest-2.0.1, a maintenance and bug fix release of pytest,
|
||||
a mature testing tool for Python, supporting CPython 2.4-3.2, Jython
|
||||
and latest PyPy interpreters. See extensive docs with tested examples here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
If you want to install or upgrade pytest, just type one of::
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Many thanks to all issue reporters and people asking questions or
|
||||
complaining. Particular thanks to Floris Bruynooghe and Ronny Pfannschmidt
|
||||
for their great coding contributions and many others for feedback and help.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
Changes between 2.0.0 and 2.0.1
|
||||
----------------------------------------------
|
||||
|
||||
- refine and unify initial capturing so that it works nicely
|
||||
even if the logging module is used on an early-loaded conftest.py
|
||||
file or plugin.
|
||||
- fix issue12 - show plugin versions with "--version" and
|
||||
"--traceconfig" and also document how to add extra information
|
||||
to reporting test header
|
||||
- fix issue17 (import-* reporting issue on python3) by
|
||||
requiring py>1.4.0 (1.4.1 is going to include it)
|
||||
- fix issue10 (numpy arrays truth checking) by refining
|
||||
assertion interpretation in py lib
|
||||
- fix issue15: make nose compatibility tests compatible
|
||||
with python3 (now that nose-1.0 supports python3)
|
||||
- remove somewhat surprising "same-conftest" detection because
|
||||
it ignores conftest.py when they appear in several subdirs.
|
||||
- improve assertions ("not in"), thanks Floris Bruynooghe
|
||||
- improve behaviour/warnings when running on top of "python -OO"
|
||||
(assertions and docstrings are turned off, leading to potential
|
||||
false positives)
|
||||
- introduce a pytest_cmdline_processargs(args) hook
|
||||
to allow dynamic computation of command line arguments.
|
||||
This fixes a regression because py.test prior to 2.0
allowed setting command line options from conftest.py
files, which pytest-2.0 so far only allowed from ini-files.
|
||||
- fix issue7: assert failures in doctest modules.
|
||||
unexpected failures in doctests will not generally
|
||||
show nicer, i.e. within the doctest failing context.
|
||||
- fix issue9: setup/teardown functions for an xfail-marked
|
||||
test will report as xfail if they fail but report as normally
|
||||
passing (not xpassing) if they succeed. This only is true
|
||||
for "direct" setup/teardown invocations because teardown_class/
|
||||
teardown_module cannot closely relate to a single test.
|
||||
- fix issue14: no logging errors at process exit
|
||||
- refinements to "collecting" output on non-ttys
|
||||
- refine internal plugin registration and --traceconfig output
|
||||
- introduce a mechanism to prevent/unregister plugins from the
|
||||
command line, see http://pytest.org/latest/plugins.html#cmdunregister
|
||||
- activate resultlog plugin by default
|
||||
- fix regression wrt yielded tests which due to the
|
||||
collection-before-running semantics were not
|
||||
setup as with pytest 1.3.4. Note, however, that
|
||||
the recommended and much cleaner way to do test
|
||||
parametrization remains the "pytest_generate_tests"
|
||||
mechanism, see the docs.
|
||||
@@ -1,4 +1,4 @@
|
||||
py.test 2.0.2: bug fixes, improved xfail/skip expressions, speedups
|
||||
py.test 2.0.2: bug fixes, improved xfail/skip expressions, speed ups
|
||||
===========================================================================
|
||||
|
||||
Welcome to pytest-2.0.2, a maintenance and bug fix release of pytest,
|
||||
@@ -32,17 +32,17 @@ Changes between 2.0.1 and 2.0.2
|
||||
|
||||
Also you can now access module globals from xfail/skipif
|
||||
expressions so that this for example works now::
|
||||
|
||||
|
||||
import pytest
|
||||
import mymodule
|
||||
@pytest.mark.skipif("mymodule.__version__[0] == '1'")
|
||||
def test_function():
|
||||
pass
|
||||
|
||||
This will not run the test function if the module's version string
|
||||
This will not run the test function if the module's version string
|
||||
does not start with a "1". Note that specifying a string instead
|
||||
of a boolean expressions allows py.test to report meaningful information
|
||||
when summarizing a test run as to what conditions lead to skipping
|
||||
of a boolean expressions allows py.test to report meaningful information
|
||||
when summarizing a test run as to what conditions lead to skipping
|
||||
(or xfail-ing) tests.
|
||||
|
||||
- fix issue28 - setup_method and pytest_generate_tests work together
|
||||
@@ -65,7 +65,7 @@ Changes between 2.0.1 and 2.0.2
|
||||
- fixed typos in the docs (thanks Victor Garcia, Brianna Laugher) and particular
|
||||
thanks to Laura Creighton who also reviewed parts of the documentation.
|
||||
|
||||
- fix slighly wrong output of verbose progress reporting for classes
|
||||
- fix slighly wrong output of verbose progress reporting for classes
|
||||
(thanks Amaury)
|
||||
|
||||
- more precise (avoiding of) deprecation warnings for node.Class|Function accesses
|
||||
@@ -1,7 +1,7 @@
|
||||
py.test 2.1.0: perfected assertions and bug fixes
|
||||
===========================================================================
|
||||
|
||||
Welcome to the relase of pytest-2.1, a mature testing tool for Python,
|
||||
Welcome to the release of pytest-2.1, a mature testing tool for Python,
|
||||
supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See
|
||||
the improved extensive docs (now also as PDF!) with tested examples here:
|
||||
|
||||
@@ -15,7 +15,7 @@ assert statements in test modules upon import, using a PEP302 hook.
|
||||
See http://pytest.org/assert.html#advanced-assertion-introspection for
|
||||
detailed information. The work has been partly sponsored by my company,
|
||||
merlinux GmbH.
|
||||
|
||||
|
||||
For further details on bug fixes and smaller enhancements see below.
|
||||
|
||||
If you want to install or upgrade pytest, just type one of::
|
||||
@@ -39,10 +39,9 @@ Changes between 2.0.3 and 2.1.0
|
||||
unexpected exceptions
|
||||
- fix issue47: timing output in junitxml for test cases is now correct
|
||||
- fix issue48: typo in MarkInfo repr leading to exception
|
||||
- fix issue49: avoid confusing error when initizaliation partially fails
|
||||
- fix issue49: avoid confusing error when initialization partially fails
|
||||
- fix issue44: env/username expansion for junitxml file path
|
||||
- show releaselevel information in test runs for pypy
|
||||
- reworked doc pages for better navigation and PDF generation
|
||||
- report KeyboardInterrupt even if interrupted during session startup
|
||||
- fix issue 35 - provide PDF doc version and download link from index page
|
||||
|
||||
doc/en/announce/release-2.1.1.txt (new file, 37 lines)
@@ -0,0 +1,37 @@
|
||||
py.test 2.1.1: assertion fixes and improved junitxml output
|
||||
===========================================================================
|
||||
|
||||
pytest-2.1.1 is a backward compatible maintenance release of the
|
||||
popular py.test testing tool. See extensive docs with examples here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
Most bug fixes address remaining issues with the perfected assertions
|
||||
introduced with 2.1.0 - many thanks to the bug reporters and to Benjamin
|
||||
Peterson for helping to fix them. Also, junitxml output now produces
|
||||
system-out/err tags which lead to better displays of tracebacks with Jenkins.
|
||||
|
||||
Also a quick note to package maintainers and others interested: there now
|
||||
is a "pytest" man page which can be generated with "make man" in doc/.
|
||||
|
||||
If you want to install or upgrade pytest, just type one of::
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
best,
|
||||
holger krekel / http://merlinux.eu
|
||||
|
||||
Changes between 2.1.0 and 2.1.1
|
||||
----------------------------------------------
|
||||
|
||||
- fix issue64 / pytest.set_trace now works within pytest_generate_tests hooks
|
||||
- fix issue60 / fix error conditions involving the creation of __pycache__
|
||||
- fix issue63 / assertion rewriting on inserts involving strings containing '%'
|
||||
- fix assertion rewriting on calls with a ** arg
|
||||
- don't cache rewritten modules if bytecode generation is disabled
|
||||
- fix assertion rewriting in read-only directories
|
||||
- fix issue59: provide system-out/err tags for junitxml output
|
||||
- fix issue61: assertion rewriting on boolean operations with 3 or more operands
|
||||
- you can now build a man page with "cd doc ; make man"
|
||||
|
||||
doc/en/announce/release-2.1.2.txt (new file, 33 lines)
@@ -0,0 +1,33 @@
|
||||
py.test 2.1.2: bug fixes and fixes for jython
|
||||
===========================================================================
|
||||
|
||||
pytest-2.1.2 is a minor backward compatible maintenance release of the
|
||||
popular py.test testing tool. pytest is commonly used for unit,
|
||||
functional- and integration testing. See extensive docs with examples
|
||||
here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
Most bug fixes address remaining issues with the perfected assertions
|
||||
introduced in the 2.1 series - many thanks to the bug reporters and to Benjamin
|
||||
Peterson for helping to fix them. pytest should also work better with
|
||||
Jython-2.5.1 (and Jython trunk).
|
||||
|
||||
If you want to install or upgrade pytest, just type one of::
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
best,
|
||||
holger krekel / http://merlinux.eu
|
||||
|
||||
Changes between 2.1.1 and 2.1.2
|
||||
----------------------------------------
|
||||
|
||||
- fix assertion rewriting on files with windows newlines on some Python versions
|
||||
- refine test discovery by package/module name (--pyargs), thanks Florian Mayer
|
||||
- fix issue69 / assertion rewriting fixed on some boolean operations
|
||||
- fix issue68 / packages now work with assertion rewriting
|
||||
- fix issue66: use different assertion rewriting caches when the -O option is passed
|
||||
- don't try assertion rewriting on Jython, use reinterp
|
||||
|
||||
doc/en/announce/release-2.1.3.txt (new file, 32 lines)
@@ -0,0 +1,32 @@
|
||||
py.test 2.1.3: just some more fixes
|
||||
===========================================================================
|
||||
|
||||
pytest-2.1.3 is a minor backward compatible maintenance release of the
|
||||
popular py.test testing tool. It is commonly used for unit, functional-
|
||||
and integration testing. See extensive docs with examples here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
The release contains another fix to the perfected assertions introduced
|
||||
with the 2.1 series as well as the new possibility to customize reporting
|
||||
for assertion expressions on a per-directory level.
|
||||
|
||||
If you want to install or upgrade pytest, just type one of::
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Thanks to the bug reporters and to Ronny Pfannschmidt, Benjamin Peterson
|
||||
and Floris Bruynooghe who implemented the fixes.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
Changes between 2.1.2 and 2.1.3
|
||||
----------------------------------------
|
||||
|
||||
- fix issue79: assertion rewriting failed on some comparisons in boolops,
|
||||
- correctly handle zero length arguments (a la pytest '')
|
||||
- fix issue67 / junitxml now contains correct test durations
|
||||
- fix issue75 / skipping test failure on jython
|
||||
- fix issue77 / Allow assertrepr_compare hook to apply to a subset of tests
|
||||
doc/en/announce/release-2.2.0.txt (new file, 95 lines)
@@ -0,0 +1,95 @@
|
||||
py.test 2.2.0: test marking++, parametrization++ and duration profiling
|
||||
===========================================================================
|
||||
|
||||
pytest-2.2.0 is a test-suite compatible release of the popular
|
||||
py.test testing tool. Plugins might need upgrades. It comes
|
||||
with these improvements:
|
||||
|
||||
* easier and more powerful parametrization of tests:
|
||||
|
||||
- new @pytest.mark.parametrize decorator to run tests with different arguments
|
||||
- new metafunc.parametrize() API for parametrizing arguments independently
|
||||
- see examples at http://pytest.org/latest/example/parametrize.html
|
||||
- NOTE that parametrize() related APIs are still a bit experimental
|
||||
and might change in future releases.
|
||||
|
||||
* improved handling of test markers and refined marking mechanism:
|
||||
|
||||
- "-m markexpr" option for selecting tests according to their mark
|
||||
- a new "markers" ini-variable for registering test markers for your project
|
||||
- the new "--strict" bails out with an error if using unregistered markers.
|
||||
- see examples at http://pytest.org/latest/example/markers.html
|
||||
|
||||
* duration profiling: new "--duration=N" option showing the N slowest test
|
||||
execution or setup/teardown calls. This is most useful if you want to
|
||||
find out where your slowest test code is.
|
||||
|
||||
* also 2.2.0 performs more eager calling of teardown/finalizers functions
|
||||
resulting in better and more accurate reporting when they fail
|
||||
|
||||
Besides there is the usual set of bug fixes along with a cleanup of
|
||||
pytest's own test suite allowing it to run on a wider range of environments.
|
||||
|
||||
For general information, see extensive docs with examples here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
If you want to install or upgrade pytest you might just type::
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Thanks to Ronny Pfannschmidt, David Burns, Jeff Donner, Daniel Nouri, Alfredo Deza and all who gave feedback or sent bug reports.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
|
||||
notes on incompatibility
|
||||
------------------------------
|
||||
|
||||
While test suites should work unchanged you might need to upgrade plugins:
|
||||
|
||||
* You need a new version of the pytest-xdist plugin (1.7) for distributing
|
||||
test runs.
|
||||
|
||||
* Other plugins might need an upgrade if they implement
|
||||
the ``pytest_runtest_logreport`` hook which now is called unconditionally
|
||||
for the setup/teardown fixture phases of a test. You may choose to
|
||||
ignore setup/teardown failures by inserting "if rep.when != 'call': return"
|
||||
or something similar. Note that most code probably "just" works because
|
||||
the hook was already called for failing setup/teardown phases of a test
|
||||
so a plugin should have been ready to grok such reports already.
|
||||
|
||||
|
||||
Changes between 2.1.3 and 2.2.0
|
||||
----------------------------------------
|
||||
|
||||
- fix issue90: introduce eager tearing down of test items so that
|
||||
teardown function are called earlier.
|
||||
- add an all-powerful metafunc.parametrize function which allows to
|
||||
parametrize test function arguments in multiple steps and therefore
|
||||
from independent plugins and places.
|
||||
- add a @pytest.mark.parametrize helper which allows to easily
|
||||
call a test function with different argument values.
|
||||
- Add examples to the "parametrize" example page, including a quick port
|
||||
of Test scenarios and the new parametrize function and decorator.
|
||||
- introduce registration for "pytest.mark.*" helpers via ini-files
|
||||
or through plugin hooks. Also introduce a "--strict" option which
|
||||
will treat unregistered markers as errors
|
||||
allowing to avoid typos and maintain a well described set of markers
|
||||
for your test suite. See examples at http://pytest.org/latest/mark.html
|
||||
and its links.
|
||||
- issue50: introduce "-m marker" option to select tests based on markers
|
||||
(this is a stricter and more predictable version of "-k" in that "-m"
|
||||
only matches complete markers and has more obvious rules for and/or
|
||||
semantics).
|
||||
- new feature to help optimizing the speed of your tests:
|
||||
--durations=N option for displaying N slowest test calls
|
||||
and setup/teardown methods.
|
||||
- fix issue87: --pastebin now works with python3
|
||||
- fix issue89: --pdb with unexpected exceptions in doctest work more sensibly
|
||||
- fix and cleanup pytest's own test suite to not leak FDs
|
||||
- fix issue83: link to generated funcarg list
|
||||
- fix issue74: pyarg module names are now checked against imp.find_module false positives
|
||||
- fix compatibility with twisted/trial-11.1.0 use cases
|
||||
doc/en/announce/release-2.2.1.txt (new file, 41 lines)
@@ -0,0 +1,41 @@
|
||||
pytest-2.2.1: bug fixes, perfect teardowns
|
||||
===========================================================================
|
||||
|
||||
|
||||
pytest-2.2.1 is a minor backward-compatible release of the py.test
|
||||
testing tool. It contains bug fixes and little improvements, including
|
||||
documentation fixes. If you are using the distributed testing
|
||||
plugin make sure to upgrade it to pytest-xdist-1.8.
|
||||
|
||||
For general information see here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
To install or upgrade pytest:
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Special thanks for helping on this release to Ronny Pfannschmidt, Jurko
|
||||
Gospodnetic and Ralf Schmitt.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
|
||||
Changes between 2.2.0 and 2.2.1
|
||||
----------------------------------------
|
||||
|
||||
- fix issue99 (in pytest and py) internal errors with resultlog now
|
||||
produce better output - fixed by normalizing pytest_internalerror
|
||||
input arguments.
|
||||
- fix issue97 / traceback issues (in pytest and py) improve traceback output
|
||||
in conjunction with jinja2 and cython which hack tracebacks
|
||||
- fix issue93 (in pytest and pytest-xdist) avoid "delayed teardowns":
|
||||
the final test in a test node will now run its teardown directly
|
||||
instead of waiting for the end of the session. Thanks Dave Hunt for
|
||||
the good reporting and feedback. The pytest_runtest_protocol as well
|
||||
as the pytest_runtest_teardown hooks now have "nextitem" available
|
||||
which will be None indicating the end of the test run.
|
||||
- fix collection crash due to unknown-source collected items, thanks
|
||||
to Ralf Schmitt (fixed by depending on a more recent pylib)
|
||||
doc/en/announce/release-2.2.2.txt (new file, 43 lines)
@@ -0,0 +1,43 @@
|
||||
pytest-2.2.2: bug fixes
|
||||
===========================================================================
|
||||
|
||||
pytest-2.2.2 (updated to 2.2.3 to fix packaging issues) is a minor
|
||||
backward-compatible release of the versatile py.test testing tool. It
|
||||
contains bug fixes and a few refinements particularly to reporting with
|
||||
"--collectonly", see below for betails.
|
||||
|
||||
For general information see here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
To install or upgrade pytest:
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Special thanks for helping on this release to Ronny Pfannschmidt
|
||||
and Ralf Schmitt and the contributors of issues.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
|
||||
Changes between 2.2.1 and 2.2.2
|
||||
----------------------------------------
|
||||
|
||||
- fix issue101: wrong args to unittest.TestCase test function now
|
||||
produce better output
|
||||
- fix issue102: report more useful errors and hints for when a
|
||||
test directory was renamed and some pyc/__pycache__ remain
|
||||
- fix issue106: allow parametrize to be applied multiple times
|
||||
e.g. from module, class and at function level.
|
||||
- fix issue107: actually perform session scope finalization
|
||||
- don't check in parametrize if indirect parameters are funcarg names
|
||||
- add chdir method to monkeypatch funcarg
|
||||
- fix crash resulting from calling monkeypatch undo a second time
|
||||
- fix issue115: make --collectonly robust against early failure
|
||||
(missing files/directories)
|
||||
- "-qq --collectonly" now shows only files and the number of tests in them
|
||||
- "-q --collectonly" now shows test ids
|
||||
- allow adding of attributes to test reports such that it also works
|
||||
with distributed testing (no upgrade of pytest-xdist needed)
|
||||
doc/en/announce/release-2.2.4.txt (new file, 39 lines)
@@ -0,0 +1,39 @@
|
||||
pytest-2.2.4: bug fixes, better junitxml/unittest/python3 compat
|
||||
===========================================================================
|
||||
|
||||
pytest-2.2.4 is a minor backward-compatible release of the versatile
|
||||
py.test testing tool. It contains bug fixes and a few refinements
|
||||
to junitxml reporting, better unittest- and python3 compatibility.
|
||||
|
||||
For general information see here:
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
To install or upgrade pytest:
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
Special thanks for helping on this release to Ronny Pfannschmidt
|
||||
and Benjamin Peterson and the contributors of issues.
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
Changes between 2.2.3 and 2.2.4
|
||||
-----------------------------------
|
||||
|
||||
- fix error message for rewritten assertions involving the % operator
|
||||
- fix issue 126: correctly match all invalid xml characters for junitxml
|
||||
binary escape
|
||||
- fix issue with unittest: now @unittest.expectedFailure markers should
|
||||
be processed correctly (you can also use @pytest.mark markers)
|
||||
- document integration with the extended distribute/setuptools test commands
|
||||
- fix issue 140: properly get the real functions
|
||||
of bound classmethods for setup/teardown_class
|
||||
- fix issue #141: switch from the deceased paste.pocoo.org to bpaste.net
|
||||
- fix issue #143: call unconfigure/sessionfinish always when
|
||||
configure/sessionstart where called
|
||||
- fix issue #144: better mangle test ids to junitxml classnames
|
||||
- upgrade distribute_setup.py to 0.6.27
|
||||
|
||||
doc/en/announce/release-2.3.0.txt (new file, 134 lines)
@@ -0,0 +1,134 @@
|
||||
pytest-2.3: improved fixtures / better unittest integration
|
||||
=============================================================================
|
||||
|
||||
pytest-2.3 comes with many major improvements for fixture/funcarg management
|
||||
and parametrized testing in Python. It is now easier, more efficient and
|
||||
more predictable to re-run the same tests with different fixture
|
||||
instances. Also, you can directly declare the caching "scope" of
|
||||
fixtures so that dependent tests throughout your whole test suite can
|
||||
re-use database or other expensive fixture objects with ease. Lastly,
|
||||
it's possible for fixture functions (formerly known as funcarg
|
||||
factories) to use other fixtures, allowing for a completely modular and
|
||||
re-usable fixture design.
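
As a rough illustration of the declarative scoping and fixture re-use
described above (the names ``db`` and ``transaction`` are invented for
this sketch and are not taken from the release itself)::

    import pytest

    @pytest.fixture(scope="module")
    def db():
        # an expensive resource, created once per test module
        return {"connected": True}

    @pytest.fixture
    def transaction(db):
        # fixture functions may themselves use other fixtures
        return {"db": db, "active": True}

    def test_uses_transaction(transaction):
        assert transaction["db"]["connected"]
        assert transaction["active"]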
|
||||
|
||||
For detailed info and tutorial-style examples, see:
|
||||
|
||||
http://pytest.org/latest/fixture.html
|
||||
|
||||
Moreover, there is now support for using pytest fixtures/funcargs with
|
||||
unittest-style suites, see here for examples:
|
||||
|
||||
http://pytest.org/latest/unittest.html
|
||||
|
||||
Besides, more unittest-test suites are now expected to "simply work"
|
||||
with pytest.
|
||||
|
||||
All changes are backward compatible and you should be able to continue
|
||||
to run your test suites and 3rd party plugins that worked with
|
||||
pytest-2.2.4.
|
||||
|
||||
If you are interested in the precise reasoning (including examples) of the
|
||||
pytest-2.3 fixture evolution, please consult
|
||||
http://pytest.org/latest/funcarg_compare.html
|
||||
|
||||
For general info on installation and getting started:
|
||||
|
||||
http://pytest.org/latest/getting-started.html
|
||||
|
||||
Docs and PDF access as usual at:
|
||||
|
||||
http://pytest.org
|
||||
|
||||
and more details for those already in the knowing of pytest can be found
|
||||
in the CHANGELOG below.
|
||||
|
||||
Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko
|
||||
Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping
|
||||
to get the new features right and well integrated. Ronny and Floris
|
||||
also helped to fix a number of bugs and yet more people helped by
|
||||
providing bug reports.
|
||||
|
||||
have fun,
|
||||
holger krekel
|
||||
|
||||
|
||||
Changes between 2.2.4 and 2.3.0
|
||||
-----------------------------------
|
||||
|
||||
- fix issue202 - better automatic names for parametrized test functions
|
||||
- fix issue139 - introduce @pytest.fixture which allows direct scoping
|
||||
and parametrization of funcarg factories. Introduce new @pytest.setup
|
||||
marker to allow the writing of setup functions which accept funcargs.
|
||||
- fix issue198 - conftest fixtures were not found on windows32 in some
|
||||
circumstances with nested directory structures due to path manipulation issues
|
||||
- fix issue193 skip test functions which were parametrized with empty
|
||||
parameter sets
|
||||
- fix python3.3 compat, mostly reporting bits that previously depended
|
||||
on dict ordering
|
||||
- introduce re-ordering of tests by resource and parametrization setup
|
||||
which takes precedence to the usual file-ordering
|
||||
- fix issue185 monkeypatching time.time does not cause pytest to fail
|
||||
- fix issue172 duplicate call of pytest.setup-decorated setup_module
|
||||
functions
|
||||
- fix junitxml=path construction so that if tests change the
|
||||
current working directory and the path is a relative path
|
||||
it is constructed correctly from the original current working dir.
|
||||
- fix "python setup.py test" example to cause a proper "errno" return
|
||||
- fix issue165 - fix broken doc links and mention stackoverflow for FAQ
|
||||
- catch unicode-issues when writing failure representations
|
||||
to terminal to prevent the whole session from crashing
|
||||
- fix xfail/skip confusion: a skip-mark or an imperative pytest.skip
|
||||
will now take precedence before xfail-markers because we
|
||||
can't determine xfail/xpass status in case of a skip. see also:
|
||||
http://stackoverflow.com/questions/11105828/in-py-test-when-i-explicitly-skip-a-test-that-is-marked-as-xfail-how-can-i-get
|
||||
|
||||
- always report installed 3rd party plugins in the header of a test run
|
||||
|
||||
- fix issue160: a failing setup of an xfail-marked tests should
|
||||
be reported as xfail (not xpass)
|
||||
|
||||
- fix issue128: show captured output when capsys/capfd are used
|
||||
|
||||
- fix issue179: properly show the dependency chain of factories
|
||||
|
||||
- pluginmanager.register(...) now raises ValueError if the
|
||||
plugin has been already registered or the name is taken
|
||||
|
||||
- fix issue159: improve http://pytest.org/latest/faq.html
|
||||
especially with respect to the "magic" history, also mention
|
||||
pytest-django, trial and unittest integration.
|
||||
|
||||
- make request.keywords and node.keywords writable. All descendant
|
||||
collection nodes will see keyword values. Keywords are dictionaries
|
||||
containing markers and other info.
|
||||
|
||||
- fix issue 178: xml binary escapes are now wrapped in py.xml.raw
|
||||
|
||||
- fix issue 176: correctly catch the builtin AssertionError
|
||||
even when we replaced AssertionError with a subclass on the
|
||||
python level
|
||||
|
||||
- factory discovery no longer fails with magic global callables
|
||||
that provide no sane __code__ object (mock.call for example)
|
||||
|
||||
- fix issue 182: testdir.inprocess_run now considers passed plugins
|
||||
|
||||
- fix issue 188: ensure sys.exc_info is clear on python2
|
||||
before calling into a test
|
||||
|
||||
- fix issue 191: add unittest TestCase runTest method support
|
||||
- fix issue 156: monkeypatch correctly handles class level descriptors
|
||||
|
||||
- reporting refinements:
|
||||
|
||||
- pytest_report_header now receives a "startdir" so that
|
||||
you can use startdir.bestrelpath(yourpath) to show
|
||||
nice relative path
|
||||
|
||||
- allow plugins to implement both pytest_report_header and
|
||||
pytest_sessionstart (sessionstart is invoked first).
|
||||
|
||||
- don't show deselected reason line if there is none
|
||||
|
||||
- py.test -vv will show all assert comparisons instead of truncating
|
||||
|
||||
doc/en/announce/release-2.3.1.txt (new file, 39 lines)
@@ -0,0 +1,39 @@
|
||||
pytest-2.3.1: fix regression with factory functions
|
||||
===========================================================================
|
||||
|
||||
pytest-2.3.1 is a quick follow-up release:
|
||||
|
||||
- fix issue202 - regression with fixture functions/funcarg factories:
|
||||
using "self" is now safe again and works as in 2.2.4. Thanks
|
||||
to Eduard Schettino for the quick bug report.
|
||||
|
||||
- disable pexpect pytest self tests on Freebsd - thanks Koob for the
|
||||
quick reporting
|
||||
|
||||
- fix/improve interactive docs with --markers
|
||||
|
||||
See
|
||||
|
||||
http://pytest.org/
|
||||
|
||||
for general information. To install or upgrade pytest:
|
||||
|
||||
pip install -U pytest # or
|
||||
easy_install -U pytest
|
||||
|
||||
best,
|
||||
holger krekel
|
||||
|
||||
|
||||
Changes between 2.3.0 and 2.3.1
|
||||
-----------------------------------
|
||||
|
||||
- fix issue202 - fix regression: using "self" from fixture functions now
|
||||
works as expected (it's the same "self" instance that a test method
|
||||
which uses the fixture sees)
|
||||
|
||||
- skip pexpect using tests (test_pdb.py mostly) on freebsd* systems
|
||||
due to pexpect not supporting it properly (hanging)
|
||||
|
||||
- link to web pages from --markers output which provides help for
|
||||
pytest.mark.* usage.
|
||||
@@ -10,14 +10,15 @@ py.test reference documentation
|
||||
builtin.txt
|
||||
customize.txt
|
||||
assert.txt
|
||||
funcargs.txt
|
||||
fixture.txt
|
||||
parametrize.txt
|
||||
xunit_setup.txt
|
||||
capture.txt
|
||||
monkeypatch.txt
|
||||
xdist.txt
|
||||
tmpdir.txt
|
||||
skipping.txt
|
||||
mark.txt
|
||||
skipping.txt
|
||||
recwarn.txt
|
||||
unittest.txt
|
||||
nose.txt
|
||||
@@ -2,9 +2,12 @@
|
||||
The writing and reporting of assertions in tests
|
||||
==================================================
|
||||
|
||||
.. _`assertfeedback`:
|
||||
.. _`assert with the assert statement`:
|
||||
.. _`assert`:
|
||||
|
||||
assert with the ``assert`` statement
|
||||
|
||||
Asserting with the ``assert`` statement
|
||||
---------------------------------------------------------
|
||||
|
||||
``py.test`` allows you to use the standard python ``assert`` for verifying
|
||||
@@ -22,14 +25,14 @@ to assert that your function returns a certain value. If this assertion fails
|
||||
you will see the return value of the function call::
|
||||
|
||||
$ py.test test_assert1.py
|
||||
============================= test session starts ==============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.1.0.dev6
|
||||
collecting ... collected 1 items
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 1 items
|
||||
|
||||
test_assert1.py F
|
||||
|
||||
=================================== FAILURES ===================================
|
||||
________________________________ test_function _________________________________
|
||||
================================= FAILURES =================================
|
||||
______________________________ test_function _______________________________
|
||||
|
||||
def test_function():
|
||||
> assert f() == 4
|
||||
@@ -37,7 +40,7 @@ you will see the return value of the function call::
|
||||
E + where 3 = f()
|
||||
|
||||
test_assert1.py:5: AssertionError
|
||||
=========================== 1 failed in 0.01 seconds ===========================
|
||||
========================= 1 failed in 0.01 seconds =========================
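
The test module itself lies outside this hunk; judging from the traceback
above, ``test_assert1.py`` presumably looks roughly like::

    # content of test_assert1.py (reconstructed from the failure output above)
    def f():
        return 3

    def test_function():
        assert f() == 4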
|
||||
|
||||
py.test has support for showing the values of the most common subexpressions
|
||||
including calls, attributes, comparisons, and binary and unary
|
||||
@@ -54,7 +57,9 @@ will be simply shown in the traceback.
|
||||
|
||||
See :ref:`assert-details` for more information on assertion introspection.
|
||||
|
||||
assertions about expected exceptions
|
||||
.. _`assertraises`:
|
||||
|
||||
Assertions about expected exceptions
|
||||
------------------------------------------
|
||||
|
||||
In order to write assertions about raised exceptions, you can use
|
||||
@@ -73,7 +78,7 @@ and if you need to have access to the actual exception info you may use::
|
||||
|
||||
# do checks related to excinfo.type, excinfo.value, excinfo.traceback
|
||||
|
||||
If you want to write test code that works on Python2.4 as well,
|
||||
If you want to write test code that works on Python 2.4 as well,
|
||||
you may also use two other ways to test for an expected exception::
|
||||
|
||||
pytest.raises(ExpectedException, func, *args, **kwargs)
|
||||
@@ -104,14 +109,14 @@ when it encounters comparisons. For example::
|
||||
if you run this module::
|
||||
|
||||
$ py.test test_assert2.py
|
||||
============================= test session starts ==============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.1.0.dev6
|
||||
collecting ... collected 1 items
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 1 items
|
||||
|
||||
test_assert2.py F
|
||||
|
||||
=================================== FAILURES ===================================
|
||||
_____________________________ test_set_comparison ______________________________
|
||||
================================= FAILURES =================================
|
||||
___________________________ test_set_comparison ____________________________
|
||||
|
||||
def test_set_comparison():
|
||||
set1 = set("1308")
|
||||
@@ -124,7 +129,7 @@ if you run this module::
|
||||
E '5'
|
||||
|
||||
test_assert2.py:5: AssertionError
|
||||
=========================== 1 failed in 0.01 seconds ===========================
|
||||
========================= 1 failed in 0.01 seconds =========================
|
||||
|
||||
Special comparisons are done for a number of cases:
|
||||
|
||||
@@ -168,10 +173,9 @@ you can run the test module and get the custom output defined in
|
||||
the conftest file::
|
||||
|
||||
$ py.test -q test_foocompare.py
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
=================================== FAILURES ===================================
|
||||
_________________________________ test_compare _________________________________
|
||||
================================= FAILURES =================================
|
||||
_______________________________ test_compare _______________________________
|
||||
|
||||
def test_compare():
|
||||
f1 = Foo(1)
|
||||
@@ -181,7 +185,6 @@ the conftest file::
|
||||
E vals: 1 != 2
|
||||
|
||||
test_foocompare.py:8: AssertionError
|
||||
1 failed in 0.01 seconds
|
||||
|
||||
.. _assert-details:
|
||||
.. _`assert introspection`:
|
||||
@@ -240,6 +243,8 @@ easy to rewrite the assertion and avoid any trouble::
|
||||
|
||||
All assert introspection can be turned off by passing ``--assert=plain``.
|
||||
|
||||
For further information, Benjamin Peterson wrote up `Behind the scenes of py.test's new assertion rewriting <http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html>`_.
|
||||
|
||||
.. versionadded:: 2.1
|
||||
Add assert rewriting as an alternate introspection technique.
|
||||
|
||||
doc/en/attic_fixtures.txt (new file, 186 lines)
@@ -0,0 +1,186 @@
|
||||
|
||||
**Test classes, modules or whole projects can make use of
|
||||
one or more fixtures**. All required fixture functions will execute
|
||||
before a test from the specifying context executes. You can use this
|
||||
to make tests operate from a pre-initialized directory or with
|
||||
certain environment variables or with pre-configured global application
|
||||
settings.
|
||||
|
||||
For example, the Django_ project requires database
|
||||
initialization to be able to import from and use its model objects.
|
||||
For that, the `pytest-django`_ plugin provides fixtures which your
|
||||
project can then easily depend or extend on, simply by referencing the
|
||||
name of the particular fixture.
|
||||
|
||||
Fixture functions have limited visibility which depends on where they
|
||||
are defined. If they are defined on a test class, only its test methods
|
||||
may use it. A fixture defined in a module can only be used
|
||||
from that test module. A fixture defined in a conftest.py file
|
||||
can only be used by the tests below the directory of that file.
|
||||
Lastly, plugins can define fixtures which are available across all
|
||||
projects.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
Python, Java and many other languages support a so called xUnit_ style
|
||||
for providing a fixed state, `test fixtures`_, for running tests. It
|
||||
typically involves calling an autouse function ahead and a teardown
function after a test executes. In 2005 pytest introduced a scope-specific
|
||||
model of automatically detecting and calling autouse and teardown
|
||||
functions on a per-module, class or function basis. The Python unittest
|
||||
package and nose have subsequently incorporated them. This model
|
||||
remains supported by pytest as :ref:`classic xunit`.
|
||||
|
||||
One property of xunit fixture functions is that they work implicitly
|
||||
by preparing global state or setting attributes on TestCase objects.
|
||||
By contrast, pytest provides :ref:`funcargs` which allow to
|
||||
dependency-inject application test state into test functions or
|
||||
methods as function arguments. If your application is sufficiently modular
|
||||
or if you are creating a new project, we recommend you now rather head over to
|
||||
:ref:`funcargs` instead because many pytest users agree that using this
|
||||
paradigm leads to better application and test organisation.
|
||||
|
||||
However, not all programs and frameworks work and can be tested in
|
||||
a fully modular way. They rather require preparation of global state
|
||||
like database autouse on which further fixtures like preparing application
|
||||
specific tables or wrapping tests in transactions can take place. For those
|
||||
needs, pytest-2.3 now supports new **fixture functions** which come with
|
||||
a ton of improvements over classic xunit fixture writing. Fixture functions:
|
||||
|
||||
- allow to separate different autouse concerns into multiple modular functions
|
||||
|
||||
- can receive and fully interoperate with :ref:`funcargs <resources>`,
|
||||
|
||||
- are called multiple times if its funcargs are parametrized,
|
||||
|
||||
- don't need to be defined directly in your test classes or modules,
|
||||
they can also be defined in a plugin or :ref:`conftest.py <conftest.py>` files and get called
|
||||
|
||||
- are called on a per-session, per-module, per-class or per-function basis
|
||||
by means of a simple "scope" declaration.
|
||||
|
||||
- can access the :ref:`request <request>` object which allows to
|
||||
introspect and interact with the (scoped) testcontext.
|
||||
|
||||
- can add cleanup functions which will be invoked when the last test
|
||||
of the fixture test context has finished executing.
|
||||
|
||||
All of these features are now demonstrated by little examples.
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
test modules accessing a global resource
|
||||
-------------------------------------------------------
|
||||
|
||||
.. note::
|
||||
|
||||
Relying on `global state is considered bad programming practise <http://en.wikipedia.org/wiki/Global_variable>`_ but when you work with an application
|
||||
that relies on it you often have no choice.
|
||||
|
||||
If you want test modules to access a global resource,
|
||||
you can stick the resource to the module globals in
|
||||
a per-module autouse function. We use a :ref:`resource factory
|
||||
<@pytest.fixture>` to create our global resource::
|
||||
|
||||
# content of conftest.py
|
||||
import pytest
|
||||
|
||||
class GlobalResource:
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def globresource():
|
||||
return GlobalResource()
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def setresource(request, globresource):
|
||||
request.module.globresource = globresource
|
||||
|
||||
Now any test module can access ``globresource`` as a module global::
|
||||
|
||||
# content of test_glob.py
|
||||
|
||||
def test_1():
|
||||
print ("test_1 %s" % globresource)
|
||||
def test_2():
|
||||
print ("test_2 %s" % globresource)
|
||||
|
||||
Let's run this module without output-capturing::
|
||||
|
||||
$ py.test -qs test_glob.py
|
||||
FF
|
||||
================================= FAILURES =================================
|
||||
__________________________________ test_1 __________________________________
|
||||
|
||||
def test_1():
|
||||
> print ("test_1 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:3: NameError
|
||||
__________________________________ test_2 __________________________________
|
||||
|
||||
def test_2():
|
||||
> print ("test_2 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:5: NameError
|
||||
|
||||
The two tests see the same global ``globresource`` object.
|
||||
|
||||
Parametrizing the global resource
|
||||
+++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
We extend the previous example and add parametrization to the globresource
|
||||
factory and also add a finalizer::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
import pytest
|
||||
|
||||
class GlobalResource:
|
||||
def __init__(self, param):
|
||||
self.param = param
|
||||
|
||||
@pytest.fixture(scope="session", params=[1,2])
|
||||
def globresource(request):
|
||||
g = GlobalResource(request.param)
|
||||
def fin():
|
||||
print "finalizing", g
|
||||
request.addfinalizer(fin)
|
||||
return g
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def setresource(request, globresource):
|
||||
request.module.globresource = globresource
|
||||
|
||||
And then re-run our test module::
|
||||
|
||||
$ py.test -qs test_glob.py
|
||||
FF
|
||||
================================= FAILURES =================================
|
||||
__________________________________ test_1 __________________________________
|
||||
|
||||
def test_1():
|
||||
> print ("test_1 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:3: NameError
|
||||
__________________________________ test_2 __________________________________
|
||||
|
||||
def test_2():
|
||||
> print ("test_2 %s" % globresource)
|
||||
E NameError: global name 'globresource' is not defined
|
||||
|
||||
test_glob.py:5: NameError
|
||||
|
||||
We are now running the two tests twice with two different global resource
|
||||
instances. Note that the tests are ordered such that only
|
||||
one instance is active at any given time: the finalizer of
|
||||
the first globresource instance is called before the second
|
||||
instance is created and sent to the autouse functions.
|
||||
|
||||
@@ -1,31 +1,78 @@
|
||||
|
||||
.. _`pytest helpers`:
|
||||
|
||||
pytest builtin helpers
|
||||
Pytest API and builtin fixtures
|
||||
================================================
|
||||
|
||||
builtin pytest.* functions and helping objects
|
||||
-----------------------------------------------------
|
||||
This is a list of ``pytest.*`` API functions and fixtures.
|
||||
|
||||
You can always use an interactive Python prompt and type::
|
||||
For information on plugin hooks and objects, see :ref:`plugins`.
|
||||
|
||||
For information on the ``pytest.mark`` mechanism, see :ref:`mark`.
|
||||
|
||||
For the below objects, you can also interactively ask for help, e.g. by
|
||||
typing on the Python interactive prompt something like::
|
||||
|
||||
import pytest
|
||||
help(pytest)
|
||||
|
||||
to get an overview on the globally available helpers.
|
||||
.. currentmodule:: pytest
|
||||
|
||||
.. automodule:: pytest
|
||||
:members:
|
||||
Invoking pytest interactively
---------------------------------------------------

builtin function arguments
.. autofunction:: main

More examples at :ref:`pytest.main-usage`
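
For orientation (not part of the diff above), ``pytest.main`` is typically
invoked from Python code roughly like this; the arguments shown are just an
illustrative choice::

    import pytest

    # run the tests under the current directory in quiet mode;
    # the return value is the exit code of the test run
    exitcode = pytest.main(["-q", "."])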

Helpers for assertions about Exceptions/Warnings
--------------------------------------------------------

.. autofunction:: raises

Examples at :ref:`assertraises`.

.. autofunction:: deprecated_call
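
As a quick illustration (a sketch, not taken from the diff), ``pytest.raises``
is commonly used as a context manager::

    import pytest

    def test_zero_division():
        with pytest.raises(ZeroDivisionError) as excinfo:
            1 / 0
        # the exception instance is available on the ExceptionInfo object
        assert "division" in str(excinfo.value)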

Raising a specific test outcome
--------------------------------------

You can use the following functions in your test, fixture or setup
functions to force a certain test outcome.  Note that most often you can
use declarative marks instead; see :ref:`skipping`.

.. autofunction:: _pytest.runner.fail
.. autofunction:: _pytest.runner.skip
.. autofunction:: _pytest.runner.importorskip
.. autofunction:: _pytest.skipping.xfail
.. autofunction:: _pytest.runner.exit
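
A small sketch of calling these outcome helpers imperatively from a test
(illustrative only; the module name used with ``importorskip`` is arbitrary)::

    import pytest

    def test_outcome_helpers():
        # skip the whole test if the dependency cannot be imported
        json = pytest.importorskip("json")
        if not hasattr(json, "dumps"):
            pytest.fail("json module lacks dumps(); failing explicitly")
        if json.dumps({}) != "{}":
            pytest.skip("unexpected json behaviour on this platform")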

fixtures and requests
-----------------------------------------------------

You can ask for available builtin or project-custom
:ref:`function arguments <funcargs>` by typing::
To mark a fixture function:

    $ py.test --funcargs
    pytestconfig
        the pytest config object with access to command line opts.

.. autofunction:: _pytest.python.fixture

Tutorial at :ref:`fixtures`.

The ``request`` object that can be used from fixture functions.

.. autoclass:: _pytest.python.FixtureRequest()
    :members:

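
For orientation, a minimal sketch of marking a fixture function and using it
from a test (illustrative names, not taken from the diff)::

    # content of conftest.py (hypothetical)
    import pytest

    @pytest.fixture
    def smtp_host(request):
        # the request object gives access to the requesting test context,
        # e.g. the module that contains the test
        return getattr(request.module, "SMTP_HOST", "localhost")

    # content of test_smtp.py (hypothetical)
    def test_host(smtp_host):
        assert smtp_host == "localhost"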
.. _builtinfixtures:
.. _builtinfuncargs:

Builtin fixtures/function arguments
-----------------------------------------

You can ask for available builtin or project-custom
:ref:`fixtures <fixtures>` by typing::

    $ py.test -q --fixtures
    capsys
        enables capturing of writes to sys.stdout/sys.stderr and makes
        captured output available via ``capsys.readouterr()`` method calls
@@ -36,13 +83,6 @@ You can ask for available builtin or project-custom
        captured output available via ``capsys.readouterr()`` method calls
        which return a ``(out, err)`` tuple.
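
A short illustrative use of ``capsys`` (a sketch, not part of the diff)::

    def test_print_capture(capsys):
        print ("hello")
        out, err = capsys.readouterr()  # text captured so far
        assert out == "hello\n"
        assert err == ""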

    tmpdir
        return a temporary directory path object
        which is unique to each test function invocation,
        created as a sub directory of the base temporary
        directory. The returned object is a `py.path.local`_
        path object.

    monkeypatch
        The returned ``monkeypatch`` funcarg provides these
        helper methods to modify objects, dictionaries or os.environ::
@@ -54,12 +94,15 @@ You can ask for available builtin or project-custom
            monkeypatch.setenv(name, value, prepend=False)
            monkeypatch.delenv(name, value, raising=True)
            monkeypatch.syspath_prepend(path)
            monkeypatch.chdir(path)

        All modifications will be undone after the requesting
        test function has finished. The ``raising``
        parameter determines if a KeyError or AttributeError
        will be raised if the set/deletion operation has no target.
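
A minimal sketch of monkeypatching an environment variable in a test
(illustrative; the variable name is made up)::

    import os

    def test_env_override(monkeypatch):
        # the modification is undone automatically after the test
        monkeypatch.setenv("MY_APP_MODE", "testing")
        assert os.environ["MY_APP_MODE"] == "testing"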
    pytestconfig
        the pytest config object with access to command line opts.
    recwarn
        Return a WarningsRecorder instance that provides these methods:

@@ -69,3 +112,11 @@ You can ask for available builtin or project-custom
        See http://docs.python.org/library/warnings.html for information
        on warning categories.
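
A sketch of using ``recwarn`` to assert that a warning was issued
(illustrative only)::

    import warnings

    def test_deprecation_warning(recwarn):
        warnings.warn("use new_api() instead", DeprecationWarning)
        # pop() fails the test if no such warning was recorded
        w = recwarn.pop(DeprecationWarning)
        assert "new_api" in str(w.message)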

    tmpdir
        return a temporary directory path object
        which is unique to each test function invocation,
        created as a sub directory of the base temporary
        directory. The returned object is a `py.path.local`_
        path object.
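
Illustrative usage of ``tmpdir`` (a sketch, not part of the diff)::

    def test_create_file(tmpdir):
        p = tmpdir.join("hello.txt")  # a py.path.local object
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1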
|
||||
@@ -64,8 +64,8 @@ of the failing function and hide the other one::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 2 items
|
||||
|
||||
test_module.py .F
|
||||
|
||||
@@ -78,8 +78,8 @@ of the failing function and hide the other one::
|
||||
|
||||
test_module.py:9: AssertionError
|
||||
----------------------------- Captured stdout ------------------------------
|
||||
setting up <function test_func2 at 0x238c410>
|
||||
==================== 1 failed, 1 passed in 0.02 seconds ====================
|
||||
setting up <function test_func2 at 0x1439488>
|
||||
==================== 1 failed, 1 passed in 0.01 seconds ====================
|
||||
|
||||
Accessing captured output from a test function
|
||||
---------------------------------------------------
|
||||
@@ -4,4 +4,4 @@
|
||||
Changelog history
|
||||
=================================
|
||||
|
||||
.. include:: ../CHANGELOG
|
||||
.. include:: ../../CHANGELOG
|
||||
287
doc/en/conf.py
Normal file
@@ -0,0 +1,287 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
#
|
||||
# pytest documentation build configuration file, created by
|
||||
# sphinx-quickstart on Fri Oct 8 17:54:28 2010.
|
||||
#
|
||||
# This file is execfile()d with the current directory set to its containing dir.
|
||||
#
|
||||
# Note that not all possible configuration values are present in this
|
||||
# autogenerated file.
|
||||
#
|
||||
# All configuration values have a default; values that are commented out
|
||||
# serve to show the default.
|
||||
|
||||
# The version info for the project you're documenting, acts as replacement for
|
||||
# |version| and |release|, also used in various other places throughout the
|
||||
# built documents.
|
||||
#
|
||||
# The full version, including alpha/beta/rc tags.
|
||||
# The short X.Y version.
|
||||
version = release = "2.4.2"
|
||||
|
||||
import sys, os
|
||||
|
||||
# If extensions (or modules to document with autodoc) are in another directory,
|
||||
# add these directories to sys.path here. If the directory is relative to the
|
||||
# documentation root, use os.path.abspath to make it absolute, like shown here.
|
||||
#sys.path.insert(0, os.path.abspath('.'))
|
||||
|
||||
autodoc_member_order = "bysource"
|
||||
todo_include_todos = 1
|
||||
|
||||
# -- General configuration -----------------------------------------------------
|
||||
|
||||
# If your documentation needs a minimal Sphinx version, state it here.
|
||||
#needs_sphinx = '1.0'
|
||||
|
||||
# Add any Sphinx extension module names here, as strings. They can be extensions
|
||||
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
|
||||
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.autosummary',
|
||||
'sphinx.ext.intersphinx', 'sphinx.ext.viewcode']
|
||||
|
||||
# Add any paths that contain templates here, relative to this directory.
|
||||
templates_path = ['_templates']
|
||||
|
||||
# The suffix of source filenames.
|
||||
source_suffix = '.txt'
|
||||
|
||||
# The encoding of source files.
|
||||
#source_encoding = 'utf-8-sig'
|
||||
|
||||
# The master toctree document.
|
||||
master_doc = 'contents'
|
||||
|
||||
# General information about the project.
|
||||
project = u'pytest'
|
||||
copyright = u'2012, holger krekel'
|
||||
|
||||
|
||||
|
||||
# The language for content autogenerated by Sphinx. Refer to documentation
|
||||
# for a list of supported languages.
|
||||
#language = None
|
||||
|
||||
# There are two options for replacing |today|: either, you set today to some
|
||||
# non-false value, then it is used:
|
||||
#today = ''
|
||||
# Else, today_fmt is used as the format for a strftime call.
|
||||
#today_fmt = '%B %d, %Y'
|
||||
|
||||
# List of patterns, relative to source directory, that match files and
|
||||
# directories to ignore when looking for source files.
|
||||
exclude_patterns = ['links.inc', '_build', 'naming20.txt', 'test/*',
|
||||
"old_*",
|
||||
'*attic*',
|
||||
'*/attic*',
|
||||
'funcargs.txt',
|
||||
'setup.txt',
|
||||
'example/remoteinterp.txt',
|
||||
]
|
||||
|
||||
|
||||
# The reST default role (used for this markup: `text`) to use for all documents.
|
||||
#default_role = None
|
||||
|
||||
# If true, '()' will be appended to :func: etc. cross-reference text.
|
||||
#add_function_parentheses = True
|
||||
|
||||
# If true, the current module name will be prepended to all description
|
||||
# unit titles (such as .. function::).
|
||||
add_module_names = False
|
||||
|
||||
# If true, sectionauthor and moduleauthor directives will be shown in the
|
||||
# output. They are ignored by default.
|
||||
#show_authors = False
|
||||
|
||||
# The name of the Pygments (syntax highlighting) style to use.
|
||||
pygments_style = 'sphinx'
|
||||
|
||||
|
||||
|
||||
# A list of ignored prefixes for module index sorting.
|
||||
#modindex_common_prefix = []
|
||||
|
||||
|
||||
# -- Options for HTML output ---------------------------------------------------
|
||||
|
||||
# The theme to use for HTML and HTML Help pages. See the documentation for
|
||||
# a list of builtin themes.
|
||||
html_theme = 'sphinxdoc'
|
||||
|
||||
# Theme options are theme-specific and customize the look and feel of a theme
|
||||
# further. For a list of options available for each theme, see the
|
||||
# documentation.
|
||||
html_theme_options = {}
|
||||
|
||||
# Add any paths that contain custom themes here, relative to this directory.
|
||||
#html_theme_path = []
|
||||
|
||||
# The name for this set of Sphinx documents. If None, it defaults to
|
||||
# "<project> v<release> documentation".
|
||||
html_title = None
|
||||
|
||||
# A shorter title for the navigation bar. Default is the same as html_title.
|
||||
html_short_title = "pytest-%s" % release
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top
|
||||
# of the sidebar.
|
||||
#html_logo = None
|
||||
|
||||
# The name of an image file (within the static path) to use as favicon of the
|
||||
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
|
||||
# pixels large.
|
||||
#html_favicon = None
|
||||
|
||||
# Add any paths that contain custom static files (such as style sheets) here,
|
||||
# relative to this directory. They are copied after the builtin static files,
|
||||
# so a file named "default.css" will overwrite the builtin "default.css".
|
||||
html_static_path = ['_static']
|
||||
|
||||
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
|
||||
# using the given strftime format.
|
||||
#html_last_updated_fmt = '%b %d, %Y'
|
||||
|
||||
# If true, SmartyPants will be used to convert quotes and dashes to
|
||||
# typographically correct entities.
|
||||
#html_use_smartypants = True
|
||||
|
||||
# Custom sidebar templates, maps document names to template names.
|
||||
#html_sidebars = {}
|
||||
#html_sidebars = {'index': 'indexsidebar.html'}
|
||||
|
||||
# Additional templates that should be rendered to pages, maps page names to
|
||||
# template names.
|
||||
#html_additional_pages = {}
|
||||
#html_additional_pages = {'index': 'index.html'}
|
||||
|
||||
|
||||
# If false, no module index is generated.
|
||||
html_domain_indices = True
|
||||
|
||||
# If false, no index is generated.
|
||||
html_use_index = False
|
||||
|
||||
# If true, the index is split into individual pages for each letter.
|
||||
#html_split_index = False
|
||||
|
||||
# If true, links to the reST sources are added to the pages.
|
||||
html_show_sourcelink = False
|
||||
|
||||
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
|
||||
#html_show_sphinx = True
|
||||
|
||||
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
|
||||
#html_show_copyright = True
|
||||
|
||||
# If true, an OpenSearch description file will be output, and all pages will
|
||||
# contain a <link> tag referring to it. The value of this option must be the
|
||||
# base URL from which the finished HTML is served.
|
||||
#html_use_opensearch = ''
|
||||
|
||||
# This is the file name suffix for HTML files (e.g. ".xhtml").
|
||||
#html_file_suffix = None
|
||||
|
||||
# Output file base name for HTML help builder.
|
||||
htmlhelp_basename = 'pytestdoc'
|
||||
|
||||
|
||||
# -- Options for LaTeX output --------------------------------------------------
|
||||
|
||||
# The paper size ('letter' or 'a4').
|
||||
#latex_paper_size = 'letter'
|
||||
|
||||
# The font size ('10pt', '11pt' or '12pt').
|
||||
#latex_font_size = '10pt'
|
||||
|
||||
# Grouping the document tree into LaTeX files. List of tuples
|
||||
# (source start file, target name, title, author, documentclass [howto/manual]).
|
||||
latex_documents = [
|
||||
('contents', 'pytest.tex', u'pytest Documentation',
|
||||
u'holger krekel, http://merlinux.eu', 'manual'),
|
||||
]
|
||||
|
||||
# The name of an image file (relative to this directory) to place at the top of
|
||||
# the title page.
|
||||
#latex_logo = None
|
||||
|
||||
# For "manual" documents, if this is true, then toplevel headings are parts,
|
||||
# not chapters.
|
||||
#latex_use_parts = False
|
||||
|
||||
# If true, show page references after internal links.
|
||||
#latex_show_pagerefs = False
|
||||
|
||||
# If true, show URL addresses after external links.
|
||||
#latex_show_urls = False
|
||||
|
||||
# Additional stuff for the LaTeX preamble.
|
||||
#latex_preamble = ''
|
||||
|
||||
# Documents to append as an appendix to all manuals.
|
||||
#latex_appendices = []
|
||||
|
||||
# If false, no module index is generated.
|
||||
latex_domain_indices = False
|
||||
|
||||
# -- Options for manual page output --------------------------------------------
|
||||
|
||||
# One entry per manual page. List of tuples
|
||||
# (source start file, name, description, authors, manual section).
|
||||
man_pages = [
|
||||
('usage', 'pytest', u'pytest usage',
|
||||
[u'holger krekel at merlinux eu'], 1)
|
||||
]
|
||||
|
||||
|
||||
# -- Options for Epub output ---------------------------------------------------
|
||||
|
||||
# Bibliographic Dublin Core info.
|
||||
epub_title = u'pytest'
|
||||
epub_author = u'holger krekel at merlinux eu'
|
||||
epub_publisher = u'holger krekel at merlinux eu'
|
||||
epub_copyright = u'2012, holger krekel et alii'
|
||||
|
||||
# The language of the text. It defaults to the language option
|
||||
# or en if the language is not set.
|
||||
#epub_language = ''
|
||||
|
||||
# The scheme of the identifier. Typical schemes are ISBN or URL.
|
||||
#epub_scheme = ''
|
||||
|
||||
# The unique identifier of the text. This can be a ISBN number
|
||||
# or the project homepage.
|
||||
#epub_identifier = ''
|
||||
|
||||
# A unique identification for the text.
|
||||
#epub_uid = ''
|
||||
|
||||
# HTML files that should be inserted before the pages created by sphinx.
|
||||
# The format is a list of tuples containing the path and title.
|
||||
#epub_pre_files = []
|
||||
|
||||
# HTML files that should be inserted after the pages created by sphinx.
|
||||
# The format is a list of tuples containing the path and title.
|
||||
#epub_post_files = []
|
||||
|
||||
# A list of files that should not be packed into the epub file.
|
||||
#epub_exclude_files = []
|
||||
|
||||
# The depth of the table of contents in toc.ncx.
|
||||
#epub_tocdepth = 3
|
||||
|
||||
# Allow duplicate toc entries.
|
||||
#epub_tocdup = True
|
||||
|
||||
|
||||
# Example configuration for intersphinx: refer to the Python standard library.
|
||||
intersphinx_mapping = {'python': ('http://docs.python.org/', None),
|
||||
# 'lib': ("http://docs.python.org/2.7library/", None),
|
||||
}
|
||||
|
||||
|
||||
def setup(app):
|
||||
#from sphinx.ext.autodoc import cut_lines
|
||||
#app.connect('autodoc-process-docstring', cut_lines(4, what=['module']))
|
||||
app.add_description_unit('confval', 'confval',
|
||||
objname='configuration value',
|
||||
indextemplate='pair: %s; configuration value')
|
||||
@@ -9,6 +9,10 @@ Contact channels
|
||||
2.0 and above). You may also peek at the `old issue tracker`_ but please
|
||||
don't submit bugs there anymore.
|
||||
|
||||
- `pytest on stackoverflow.com <http://stackoverflow.com/search?q=pytest>`_
|
||||
to post questions with the tag ``pytest``. New questions will usually
|
||||
be seen by pytest users or developers.
|
||||
|
||||
- `Testing In Python`_: a mailing list for Python testing tools and discussion.
|
||||
|
||||
- `py-dev developers list`_ pytest specific announcements and discussions.
|
||||
@@ -1,10 +1,10 @@
|
||||
|
||||
.. _toc:
|
||||
|
||||
Full pytest documenation
|
||||
========================
|
||||
Full pytest documentation
|
||||
===========================
|
||||
|
||||
`Download latest version as PDF <http://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_
|
||||
`Download latest version as PDF <pytest.pdf>`_
|
||||
|
||||
.. `Download latest version as EPUB <http://media.readthedocs.org/epub/pytest/latest/pytest.epub>`_
|
||||
|
||||
@@ -12,17 +12,16 @@ Full pytest documenation
|
||||
:maxdepth: 2
|
||||
|
||||
overview
|
||||
example/index
|
||||
apiref
|
||||
plugins
|
||||
example/index
|
||||
talks
|
||||
develop
|
||||
funcarg_compare.txt
|
||||
announce/index
|
||||
|
||||
.. toctree::
|
||||
:hidden:
|
||||
|
||||
changelog.txt
|
||||
naming20.txt
|
||||
example/attic
|
||||
|
||||
@@ -1,4 +1,4 @@
|
||||
basic test configuration
|
||||
Basic test configuration
|
||||
===================================
|
||||
|
||||
Command line options and configuration file settings
|
||||
@@ -23,8 +23,9 @@ It looks for file basenames in this order::
|
||||
tox.ini
|
||||
setup.cfg
|
||||
|
||||
Searching stops when the first ``[pytest]`` section is found.
|
||||
There is no merging of configuration values from multiple files. Example::
|
||||
Searching stops when the first ``[pytest]`` section is found in any of
|
||||
these files. There is no merging of configuration values from multiple
|
||||
files. Example::
|
||||
|
||||
py.test path/to/testdir
|
||||
|
||||
@@ -59,18 +60,18 @@ progress output, you can write it into a configuration file::
|
||||
|
||||
From now on, running ``py.test`` will add the specified options.
|
||||
|
||||
builtin configuration file options
|
||||
Builtin configuration file options
|
||||
----------------------------------------------
|
||||
|
||||
.. confval:: minversion
|
||||
|
||||
specifies a minimal pytest version required for running tests.
|
||||
Specifies a minimal pytest version required for running tests.
|
||||
|
||||
minversion = 2.1 # will fail if we run with pytest-2.0
|
||||
|
||||
.. confval:: addopts
|
||||
|
||||
add the specified ``OPTS`` to the set of command line arguments as if they
|
||||
Add the specified ``OPTS`` to the set of command line arguments as if they
|
||||
had been specified by the user. Example: if you have this ini file content::
|
||||
|
||||
[pytest]
|
||||
@@ -94,7 +95,7 @@ builtin configuration file options
|
||||
[seq] matches any character in seq
|
||||
[!seq] matches any char not in seq
|
||||
|
||||
Default patterns are ``.* _* CVS {args}``. Setting a ``norecurse``
|
||||
Default patterns are ``.* _* CVS {args}``. Setting a ``norecursedirs``
|
||||
replaces the default. Here is an example of how to avoid
|
||||
certain directories::
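
The concrete ini example falls outside the hunk shown above; an illustrative
setting (values assumed, not taken from the diff) would be::

    [pytest]
    norecursedirs = .svn _build tmp*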
@@ -121,4 +122,3 @@ builtin configuration file options
|
||||
and methods are considered as test modules.
|
||||
|
||||
See :ref:`change naming conventions` for examples.
|
||||
|
||||
@@ -1,5 +1,5 @@
|
||||
|
||||
doctest integration for modules and test files
|
||||
Doctest integration for modules and test files
|
||||
=========================================================
|
||||
|
||||
By default all files matching the ``test*.txt`` pattern will
|
||||
@@ -44,9 +44,9 @@ then you can just invoke ``py.test`` without command line options::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 1 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 1 items
|
||||
|
||||
mymodule.py .
|
||||
|
||||
========================= 1 passed in 0.40 seconds =========================
|
||||
========================= 1 passed in 0.02 seconds =========================
|
||||
@@ -15,7 +15,7 @@ def test_generative(param1, param2):
|
||||
assert param1 * 2 < param2
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
if 'param1' in metafunc.funcargnames:
|
||||
if 'param1' in metafunc.fixturenames:
|
||||
metafunc.addcall(funcargs=dict(param1=3, param2=6))
|
||||
|
||||
class TestFailing(object):
|
||||
@@ -13,9 +13,9 @@ example: specifying and selecting acceptance tests
|
||||
help="run (slow) acceptance tests")
|
||||
|
||||
def pytest_funcarg__accept(request):
|
||||
return AcceptFuncarg(request)
|
||||
return AcceptFixture(request)
|
||||
|
||||
class AcceptFuncarg:
|
||||
class AcceptFixture:
|
||||
def __init__(self, request):
|
||||
if not request.config.option.acceptance:
|
||||
pytest.skip("specify -A to run acceptance tests")
|
||||
@@ -39,7 +39,7 @@ and the actual test function example:
|
||||
If you run this test without specifying a command line option
|
||||
the test will get skipped with an appropriate message. Otherwise
|
||||
you can start to add convenience and test support methods
|
||||
to your AcceptFuncarg and drive running of tools or
|
||||
to your AcceptFixture and drive running of tools or
|
||||
applications and provide ways to do assertions about
|
||||
the output.
|
||||
|
||||
18
doc/en/example/costlysetup/conftest.py
Normal file
@@ -0,0 +1,18 @@
|
||||
|
||||
import pytest
|
||||
|
||||
@pytest.fixture("session")
|
||||
def setup(request):
|
||||
setup = CostlySetup()
|
||||
request.addfinalizer(setup.finalize)
|
||||
return setup
|
||||
|
||||
class CostlySetup:
|
||||
def __init__(self):
|
||||
import time
|
||||
print ("performing costly setup")
|
||||
time.sleep(5)
|
||||
self.timecostly = 1
|
||||
|
||||
def finalize(self):
|
||||
del self.timecostly
|
||||
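
For context (not part of the new file above), a test module in the same
directory could use the session-scoped ``setup`` fixture like this; the test
file name and assertion are illustrative::

    # content of test_costly.py (hypothetical)

    def test_has_costly_attribute(setup):
        # the expensive object is created once per session and reused
        assert setup.timecostly == 1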
33
doc/en/example/index.txt
Normal file
@@ -0,0 +1,33 @@
|
||||
|
||||
.. _examples:
|
||||
|
||||
Usages and Examples
|
||||
===========================================
|
||||
|
||||
Here is a (growing) list of examples. :ref:`Contact <contact>` us if you
|
||||
need more examples or have questions. Also take a look at the
|
||||
:ref:`comprehensive documentation <toc>` which contains many example
|
||||
snippets as well. Also, `pytest on stackoverflow.com
|
||||
<http://stackoverflow.com/search?q=pytest>`_ often comes with example
|
||||
answers.
|
||||
|
||||
For basic examples, see
|
||||
|
||||
- :doc:`../getting-started` for basic introductory examples
|
||||
- :ref:`assert` for basic assertion examples
|
||||
- :ref:`fixtures` for basic fixture/setup examples
|
||||
- :ref:`parametrize` for basic test function parametrization
|
||||
- :doc:`../unittest` for basic unittest integration
|
||||
- :doc:`../nose` for basic nosetests integration
|
||||
|
||||
The following examples aim at various use cases you might encounter.
|
||||
|
||||
.. toctree::
|
||||
:maxdepth: 2
|
||||
|
||||
reportingdemo.txt
|
||||
simple.txt
|
||||
parametrize.txt
|
||||
markers.txt
|
||||
pythoncollection.txt
|
||||
nonpython.txt
|
||||
375
doc/en/example/markers.txt
Normal file
@@ -0,0 +1,375 @@
|
||||
|
||||
.. _`mark examples`:
|
||||
|
||||
Working with custom markers
|
||||
=================================================
|
||||
|
||||
Here are some examples using the :ref:`mark` mechanism.
|
||||
|
||||
Marking test functions and selecting them for a run
|
||||
----------------------------------------------------
|
||||
|
||||
You can "mark" a test function with custom metadata like this::
|
||||
|
||||
# content of test_server.py
|
||||
|
||||
import pytest
|
||||
@pytest.mark.webtest
|
||||
def test_send_http():
|
||||
pass # perform some webtest test for your app
|
||||
def test_something_quick():
|
||||
pass
|
||||
|
||||
.. versionadded:: 2.2
|
||||
|
||||
You can then restrict a test run to only run tests marked with ``webtest``::
|
||||
|
||||
$ py.test -v -m webtest
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
collecting ... collected 2 items
|
||||
|
||||
test_server.py:3: test_send_http PASSED
|
||||
|
||||
=================== 1 tests deselected by "-m 'webtest'" ===================
|
||||
================== 1 passed, 1 deselected in 0.01 seconds ==================
|
||||
|
||||
Or the inverse, running all tests except the webtest ones::
|
||||
|
||||
$ py.test -v -m "not webtest"
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
collecting ... collected 2 items
|
||||
|
||||
test_server.py:6: test_something_quick PASSED
|
||||
|
||||
================= 1 tests deselected by "-m 'not webtest'" =================
|
||||
================== 1 passed, 1 deselected in 0.01 seconds ==================
|
||||
|
||||
Registering markers
|
||||
-------------------------------------
|
||||
|
||||
.. versionadded:: 2.2
|
||||
|
||||
.. ini-syntax for custom markers:
|
||||
|
||||
Registering markers for your test suite is simple::
|
||||
|
||||
# content of pytest.ini
|
||||
[pytest]
|
||||
markers =
|
||||
webtest: mark a test as a webtest.
|
||||
|
||||
You can ask which markers exist for your test suite - the list includes our just defined ``webtest`` markers::
|
||||
|
||||
$ py.test --markers
|
||||
@pytest.mark.webtest: mark a test as a webtest.
|
||||
|
||||
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
|
||||
|
||||
@pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
|
||||
|
||||
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. see http://pytest.org/latest/parametrize.html for more info and examples.
|
||||
|
||||
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
|
||||
|
||||
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
|
||||
|
||||
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
|
||||
|
||||
|
||||
For an example on how to add and work with markers from a plugin, see
|
||||
:ref:`adding a custom marker from a plugin`.
|
||||
|
||||
.. note::
|
||||
|
||||
It is recommended to explicitly register markers so that:
|
||||
|
||||
* there is one place in your test suite defining your markers
|
||||
|
||||
* asking for existing markers via ``py.test --markers`` gives good output
|
||||
|
||||
* typos in function markers are treated as an error if you use
|
||||
the ``--strict`` option. Later versions of py.test are probably
|
||||
going to treat non-registered markers as an error.
|
||||
|
||||
.. _`scoped-marking`:
|
||||
|
||||
Marking whole classes or modules
|
||||
----------------------------------------------------
|
||||
|
||||
If you are programming with Python 2.6 or later you may use ``pytest.mark``
|
||||
decorators with classes to apply markers to all of its test methods::
|
||||
|
||||
# content of test_mark_classlevel.py
|
||||
import pytest
|
||||
@pytest.mark.webtest
|
||||
class TestClass:
|
||||
def test_startup(self):
|
||||
pass
|
||||
def test_startup_and_more(self):
|
||||
pass
|
||||
|
||||
This is equivalent to directly applying the decorator to the
|
||||
two test functions.
|
||||
|
||||
To remain backward-compatible with Python 2.4 you can also set a
|
||||
``pytestmark`` attribute on a TestClass like this::
|
||||
|
||||
import pytest
|
||||
|
||||
class TestClass:
|
||||
pytestmark = pytest.mark.webtest
|
||||
|
||||
or if you need to use multiple markers you can use a list::
|
||||
|
||||
import pytest
|
||||
|
||||
class TestClass:
|
||||
pytestmark = [pytest.mark.webtest, pytest.mark.slowtest]
|
||||
|
||||
You can also set a module level marker::
|
||||
|
||||
import pytest
|
||||
pytestmark = pytest.mark.webtest
|
||||
|
||||
in which case it will be applied to all functions and
|
||||
methods defined in the module.
|
||||
|
||||
|
||||
Using ``-k TEXT`` to select tests
|
||||
----------------------------------------------------
|
||||
|
||||
You can use the ``-k`` command line option to only run tests with names matching
|
||||
the given argument::
|
||||
|
||||
$ py.test -k send_http # running with the above defined examples
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
|
||||
test_server.py .
|
||||
|
||||
=================== 3 tests deselected by '-ksend_http' ====================
|
||||
================== 1 passed, 3 deselected in 0.01 seconds ==================
|
||||
|
||||
And you can also run all tests except the ones that match the keyword::
|
||||
|
||||
$ py.test -k-send_http
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
|
||||
test_mark_classlevel.py ..
|
||||
test_server.py .
|
||||
|
||||
=================== 1 tests deselected by '-k-send_http' ===================
|
||||
================== 3 passed, 1 deselected in 0.01 seconds ==================
|
||||
|
||||
Or to only select the class::
|
||||
|
||||
$ py.test -kTestClass
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
|
||||
test_mark_classlevel.py ..
|
||||
|
||||
=================== 2 tests deselected by '-kTestClass' ====================
|
||||
================== 2 passed, 2 deselected in 0.01 seconds ==================
|
||||
|
||||
.. _`adding a custom marker from a plugin`:
|
||||
|
||||
Custom marker and command line option to control test runs
|
||||
----------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Plugins can provide custom markers and implement specific behaviour
|
||||
based on it. This is a self-contained example which adds a command
|
||||
line option and a parametrized test function marker to run tests
|
||||
specified via named environments::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
import pytest
|
||||
def pytest_addoption(parser):
|
||||
parser.addoption("-E", dest="env", action="store", metavar="NAME",
|
||||
help="only run tests matching the environment NAME.")
|
||||
|
||||
def pytest_configure(config):
|
||||
# register an additional marker
|
||||
config.addinivalue_line("markers",
|
||||
"env(name): mark test to run only on named environment")
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
envmarker = item.keywords.get("env", None)
|
||||
if envmarker is not None:
|
||||
envname = envmarker.args[0]
|
||||
if envname != item.config.option.env:
|
||||
pytest.skip("test requires env %r" % envname)
|
||||
|
||||
A test file using this local plugin::
|
||||
|
||||
# content of test_someenv.py
|
||||
|
||||
import pytest
|
||||
@pytest.mark.env("stage1")
|
||||
def test_basic_db_operation():
|
||||
pass
|
||||
|
||||
and an example invocation specifying a different environment than what
|
||||
the test needs::
|
||||
|
||||
$ py.test -E stage2
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 1 items
|
||||
|
||||
test_someenv.py s
|
||||
|
||||
======================== 1 skipped in 0.01 seconds =========================
|
||||
|
||||
and here is one that specifies exactly the environment needed::
|
||||
|
||||
$ py.test -E stage1
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 1 items
|
||||
|
||||
test_someenv.py .
|
||||
|
||||
========================= 1 passed in 0.01 seconds =========================
|
||||
|
||||
The ``--markers`` option always gives you a list of available markers::
|
||||
|
||||
$ py.test --markers
|
||||
@pytest.mark.env(name): mark test to run only on named environment
|
||||
|
||||
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
|
||||
|
||||
@pytest.mark.xfail(condition, reason=None, run=True): mark the the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html
|
||||
|
||||
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. see http://pytest.org/latest/parametrize.html for more info and examples.
|
||||
|
||||
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
|
||||
|
||||
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
|
||||
|
||||
@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.
|
||||
|
||||
|
||||
Reading markers which were set from multiple places
|
||||
----------------------------------------------------
|
||||
|
||||
.. versionadded: 2.2.2
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times to a test function. From plugin
|
||||
code you can read over all such settings. Example::
|
||||
|
||||
# content of test_mark_three_times.py
|
||||
import pytest
|
||||
pytestmark = pytest.mark.glob("module", x=1)
|
||||
|
||||
@pytest.mark.glob("class", x=2)
|
||||
class TestClass:
|
||||
@pytest.mark.glob("function", x=3)
|
||||
def test_something(self):
|
||||
pass
|
||||
|
||||
Here we have the marker "glob" applied three times to the same
|
||||
test function. From a conftest file we can read it like this::
|
||||
|
||||
# content of conftest.py
|
||||
import sys
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
g = item.keywords.get("glob", None)
|
||||
if g is not None:
|
||||
for info in g:
|
||||
print ("glob args=%s kwargs=%s" %(info.args, info.kwargs))
|
||||
sys.stdout.flush()
|
||||
|
||||
Let's run this without capturing output and see what we get::
|
||||
|
||||
$ py.test -q -s
|
||||
glob args=('function',) kwargs={'x': 3}
|
||||
glob args=('class',) kwargs={'x': 2}
|
||||
glob args=('module',) kwargs={'x': 1}
|
||||
.
|
||||
|
||||
marking platform specific tests with pytest
|
||||
--------------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Consider you have a test suite which marks tests for particular platforms,
|
||||
namely ``pytest.mark.osx``, ``pytest.mark.win32`` etc. and you
|
||||
also have tests that run on all platforms and have no specific
|
||||
marker. If you now want to have a way to only run the tests
|
||||
for your particular platform, you could use the following plugin::
|
||||
|
||||
# content of conftest.py
|
||||
#
|
||||
import sys
|
||||
import pytest
|
||||
|
||||
ALL = set("osx linux2 win32".split())
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
if isinstance(item, item.Function):
|
||||
plat = sys.platform
|
||||
if plat not in item.keywords:
|
||||
if ALL.intersection(item.keywords):
|
||||
pytest.skip("cannot run on platform %s" %(plat))
|
||||
|
||||
then tests will be skipped if they were specified for a different platform.
|
||||
Let's do a little test file to show what this looks like::
|
||||
|
||||
# content of test_plat.py
|
||||
|
||||
import pytest
|
||||
|
||||
@pytest.mark.osx
|
||||
def test_if_apple_is_evil():
|
||||
pass
|
||||
|
||||
@pytest.mark.linux2
|
||||
def test_if_linux_works():
|
||||
pass
|
||||
|
||||
@pytest.mark.win32
|
||||
def test_if_win32_crashes():
|
||||
pass
|
||||
|
||||
def test_runs_everywhere():
|
||||
pass
|
||||
|
||||
then you will see two tests skipped and two tests executed, as expected::
|
||||
|
||||
$ py.test -rs # this option reports skip reasons
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
|
||||
test_plat.py s.s.
|
||||
========================= short test summary info ==========================
|
||||
SKIP [2] /tmp/doc-exec-99/conftest.py:12: cannot run on platform linux2
|
||||
|
||||
=================== 2 passed, 2 skipped in 0.01 seconds ====================
|
||||
|
||||
Note that if you specify a platform via the marker-command line option like this::
|
||||
|
||||
$ py.test -m linux2
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
|
||||
test_plat.py .
|
||||
|
||||
=================== 3 tests deselected by "-m 'linux2'" ====================
|
||||
================== 1 passed, 3 deselected in 0.01 seconds ==================
|
||||
|
||||
then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
|
||||
50
doc/en/example/multipython.py
Normal file
@@ -0,0 +1,50 @@
|
||||
"""
|
||||
module containing parametrized tests testing cross-python
|
||||
serialization via the pickle module.
|
||||
"""
|
||||
import py, pytest
|
||||
|
||||
pythonlist = ['python2.4', 'python2.5', 'python2.6', 'python2.7', 'python2.8']
|
||||
@pytest.fixture(params=pythonlist)
|
||||
def python1(request, tmpdir):
|
||||
picklefile = tmpdir.join("data.pickle")
|
||||
return Python(request.param, picklefile)
|
||||
|
||||
@pytest.fixture(params=pythonlist)
|
||||
def python2(request, python1):
|
||||
return Python(request.param, python1.picklefile)
|
||||
|
||||
class Python:
|
||||
def __init__(self, version, picklefile):
|
||||
self.pythonpath = py.path.local.sysfind(version)
|
||||
if not self.pythonpath:
|
||||
py.test.skip("%r not found" %(version,))
|
||||
self.picklefile = picklefile
|
||||
def dumps(self, obj):
|
||||
dumpfile = self.picklefile.dirpath("dump.py")
|
||||
dumpfile.write(py.code.Source("""
|
||||
import pickle
|
||||
f = open(%r, 'wb')
|
||||
s = pickle.dump(%r, f)
|
||||
f.close()
|
||||
""" % (str(self.picklefile), obj)))
|
||||
py.process.cmdexec("%s %s" %(self.pythonpath, dumpfile))
|
||||
|
||||
def load_and_is_true(self, expression):
|
||||
loadfile = self.picklefile.dirpath("load.py")
|
||||
loadfile.write(py.code.Source("""
|
||||
import pickle
|
||||
f = open(%r, 'rb')
|
||||
obj = pickle.load(f)
|
||||
f.close()
|
||||
res = eval(%r)
|
||||
if not res:
|
||||
raise SystemExit(1)
|
||||
""" % (str(self.picklefile), expression)))
|
||||
print (loadfile)
|
||||
py.process.cmdexec("%s %s" %(self.pythonpath, loadfile))
|
||||
|
||||
@pytest.mark.parametrize("obj", [42, {}, {1:3},])
|
||||
def test_basic_objects(python1, python2, obj):
|
||||
python1.dumps(obj)
|
||||
python2.load_and_is_true("obj == %s" % obj)
|
||||
@@ -6,7 +6,7 @@ Working with non-python tests
|
||||
|
||||
.. _`yaml plugin`:
|
||||
|
||||
a basic example for specifying tests in Yaml files
|
||||
A basic example for specifying tests in Yaml files
|
||||
--------------------------------------------------------------
|
||||
|
||||
.. _`pytest-yamlwsgi`: http://bitbucket.org/aafshar/pytest-yamlwsgi/src/tip/pytest_yamlwsgi.py
|
||||
@@ -27,8 +27,8 @@ now execute the test specification::
|
||||
|
||||
nonpython $ py.test test_simple.yml
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 2 items
|
||||
|
||||
test_simple.yml .F
|
||||
|
||||
@@ -37,7 +37,7 @@ now execute the test specification::
|
||||
usecase execution failed
|
||||
spec failed: 'some': 'other'
|
||||
no further details known at this point.
|
||||
==================== 1 failed, 1 passed in 0.24 seconds ====================
|
||||
==================== 1 failed, 1 passed in 0.03 seconds ====================
|
||||
|
||||
You get one dot for the passing ``sub1: sub1`` check and one failure.
|
||||
Obviously in the above ``conftest.py`` you'll want to implement a more
|
||||
@@ -51,12 +51,12 @@ your own domain specific testing language this way.
|
||||
representation string of your choice. It
|
||||
will be reported as a (red) string.
|
||||
|
||||
``reportinfo()`` is used for representing the test location and is also consulted for
|
||||
reporting in ``verbose`` mode::
|
||||
``reportinfo()`` is used for representing the test location and is also
|
||||
consulted when reporting in ``verbose`` mode::
|
||||
|
||||
nonpython $ py.test -v
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3 -- /home/hpk/venv/0/bin/python
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
collecting ... collected 2 items
|
||||
|
||||
test_simple.yml:1: usecase: ok PASSED
|
||||
@@ -67,17 +67,17 @@ reporting in ``verbose`` mode::
|
||||
usecase execution failed
|
||||
spec failed: 'some': 'other'
|
||||
no further details known at this point.
|
||||
==================== 1 failed, 1 passed in 0.07 seconds ====================
|
||||
==================== 1 failed, 1 passed in 0.03 seconds ====================
|
||||
|
||||
While developing your custom test collection and execution it's also
|
||||
interesting to just look at the collection tree::
|
||||
|
||||
nonpython $ py.test --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 2 items
|
||||
<YamlFile 'test_simple.yml'>
|
||||
<YamlItem 'ok'>
|
||||
<YamlItem 'hello'>
|
||||
|
||||
============================= in 0.07 seconds =============================
|
||||
============================= in 0.02 seconds =============================
|
||||
280
doc/en/example/parametrize.txt
Normal file
@@ -0,0 +1,280 @@
|
||||
|
||||
.. _paramexamples:
|
||||
|
||||
Parametrizing tests
|
||||
=================================================
|
||||
|
||||
.. currentmodule:: _pytest.python
|
||||
|
||||
py.test allows you to easily parametrize test functions.
For basic docs, see :ref:`parametrize-basics`.

In the following we provide some examples using
the builtin mechanisms.
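
As a baseline before the more involved examples below, the basic decorator
form looks like this (a minimal sketch)::

    import pytest

    @pytest.mark.parametrize("n, expected", [(1, 2), (2, 3)])
    def test_increment(n, expected):
        assert n + 1 == expected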
Generating parameters combinations, depending on command line
|
||||
----------------------------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Let's say we want to execute a test with different computation
|
||||
parameters and the parameter range shall be determined by a command
|
||||
line argument. Let's first write a simple (do-nothing) computation test::
|
||||
|
||||
# content of test_compute.py
|
||||
|
||||
def test_compute(param1):
|
||||
assert param1 < 4
|
||||
|
||||
Now we add a test configuration like this::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
def pytest_addoption(parser):
|
||||
parser.addoption("--all", action="store_true",
|
||||
help="run all combinations")
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
if 'param1' in metafunc.fixturenames:
|
||||
if metafunc.config.option.all:
|
||||
end = 5
|
||||
else:
|
||||
end = 2
|
||||
metafunc.parametrize("param1", range(end))
|
||||
|
||||
This means that we only run 2 tests if we do not pass ``--all``::
|
||||
|
||||
$ py.test -q test_compute.py
|
||||
..
|
||||
|
||||
We run only two computations, so we see two dots.
|
||||
Let's run the full monty::
|
||||
|
||||
$ py.test -q --all
|
||||
....F
|
||||
================================= FAILURES =================================
|
||||
_____________________________ test_compute[4] ______________________________
|
||||
|
||||
param1 = 4
|
||||
|
||||
def test_compute(param1):
|
||||
> assert param1 < 4
|
||||
E assert 4 < 4
|
||||
|
||||
test_compute.py:3: AssertionError
|
||||
|
||||
As expected when running the full range of ``param1`` values
|
||||
we'll get an error on the last one.
|
||||
|
||||
A quick port of "testscenarios"
|
||||
------------------------------------
|
||||
|
||||
.. _`test scenarios`: http://pypi.python.org/pypi/testscenarios/
|
||||
|
||||
Here is a quick port to run tests configured with `test scenarios`_,
|
||||
an add-on from Robert Collins for the standard unittest framework. We
|
||||
only have to work a bit to construct the correct arguments for pytest's
|
||||
:py:func:`Metafunc.parametrize`::
|
||||
|
||||
# content of test_scenarios.py
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
idlist = []
|
||||
argvalues = []
|
||||
for scenario in metafunc.cls.scenarios:
|
||||
idlist.append(scenario[0])
|
||||
items = scenario[1].items()
|
||||
argnames = [x[0] for x in items]
|
||||
argvalues.append(([x[1] for x in items]))
|
||||
metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")
|
||||
|
||||
scenario1 = ('basic', {'attribute': 'value'})
|
||||
scenario2 = ('advanced', {'attribute': 'value2'})
|
||||
|
||||
class TestSampleWithScenarios:
|
||||
scenarios = [scenario1, scenario2]
|
||||
|
||||
def test_demo1(self, attribute):
|
||||
assert isinstance(attribute, str)
|
||||
|
||||
def test_demo2(self, attribute):
|
||||
assert isinstance(attribute, str)
|
||||
|
||||
This is a fully self-contained example which you can run with::
|
||||
|
||||
$ py.test test_scenarios.py
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
|
||||
test_scenarios.py ....
|
||||
|
||||
========================= 4 passed in 0.01 seconds =========================
|
||||
|
||||
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
|
||||
|
||||
|
||||
$ py.test --collectonly test_scenarios.py
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
<Module 'test_scenarios.py'>
|
||||
<Class 'TestSampleWithScenarios'>
|
||||
<Instance '()'>
|
||||
<Function 'test_demo1[basic]'>
|
||||
<Function 'test_demo2[basic]'>
|
||||
<Function 'test_demo1[advanced]'>
|
||||
<Function 'test_demo2[advanced]'>
|
||||
|
||||
============================= in 0.01 seconds =============================
|
||||
|
||||
Note that we told ``metafunc.parametrize()`` that your scenario values
|
||||
should be considered class-scoped. With pytest-2.3 this leads to a
|
||||
resource-based ordering.
|
||||
|
||||
Deferring the setup of parametrized resources
|
||||
---------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
The parametrization of test functions happens at collection
|
||||
time. It is a good idea to set up expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example of how you can achieve that; first
the actual test requiring a ``db`` object::
|
||||
|
||||
# content of test_backends.py
|
||||
|
||||
import pytest
|
||||
def test_db_initialized(db):
|
||||
# a dummy test
|
||||
if db.__class__.__name__ == "DB2":
|
||||
pytest.fail("deliberately failing for demo purposes")
|
||||
|
||||
We can now add a test configuration that generates two invocations of
|
||||
the ``test_db_initialized`` function and also implements a factory that
|
||||
creates a database object for the actual test invocations::
|
||||
|
||||
# content of conftest.py
|
||||
import pytest
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
if 'db' in metafunc.fixturenames:
|
||||
metafunc.parametrize("db", ['d1', 'd2'], indirect=True)
|
||||
|
||||
class DB1:
|
||||
"one database object"
|
||||
class DB2:
|
||||
"alternative database object"
|
||||
|
||||
@pytest.fixture
|
||||
def db(request):
|
||||
if request.param == "d1":
|
||||
return DB1()
|
||||
elif request.param == "d2":
|
||||
return DB2()
|
||||
else:
|
||||
raise ValueError("invalid internal test config")
|
||||
|
||||
Let's first see what it looks like at collection time::
|
||||
|
||||
$ py.test test_backends.py --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 2 items
|
||||
<Module 'test_backends.py'>
|
||||
<Function 'test_db_initialized[d1]'>
|
||||
<Function 'test_db_initialized[d2]'>
|
||||
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
And then when we run the test::
|
||||
|
||||
$ py.test -q test_backends.py
|
||||
.F
|
||||
================================= FAILURES =================================
|
||||
_________________________ test_db_initialized[d2] __________________________
|
||||
|
||||
db = <conftest.DB2 instance at 0x216ccb0>
|
||||
|
||||
def test_db_initialized(db):
|
||||
# a dummy test
|
||||
if db.__class__.__name__ == "DB2":
|
||||
> pytest.fail("deliberately failing for demo purposes")
|
||||
E Failed: deliberately failing for demo purposes
|
||||
|
||||
test_backends.py:6: Failed
|
||||
|
||||
The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase.
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Parametrizing test methods through per-class configuration
|
||||
--------------------------------------------------------------
|
||||
|
||||
.. _`unittest parameterizer`: http://code.google.com/p/unittest-ext/source/browse/trunk/params.py
|
||||
|
||||
|
||||
Here is an example ``pytest_generate_tests`` function implementing a
|
||||
parametrization scheme similar to Michael Foord's `unittest
|
||||
parameterizer`_ but in a lot less code::
|
||||
|
||||
# content of ./test_parametrize.py
|
||||
import pytest
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
# called once per each test function
|
||||
funcarglist = metafunc.cls.params[metafunc.function.__name__]
|
||||
argnames = list(funcarglist[0])
|
||||
metafunc.parametrize(argnames, [[funcargs[name] for name in argnames]
|
||||
for funcargs in funcarglist])
|
||||
|
||||
class TestClass:
|
||||
# a map specifying multiple argument sets for a test method
|
||||
params = {
|
||||
'test_equals': [dict(a=1, b=2), dict(a=3, b=3), ],
|
||||
'test_zerodivision': [dict(a=1, b=0), ],
|
||||
}
|
||||
|
||||
def test_equals(self, a, b):
|
||||
assert a == b
|
||||
|
||||
def test_zerodivision(self, a, b):
|
||||
pytest.raises(ZeroDivisionError, "a/b")
|
||||
|
||||
Our test generator looks up a class-level definition which specifies which
|
||||
argument sets to use for each test function. Let's run it::
|
||||
|
||||
$ py.test -q
|
||||
F..
|
||||
================================= FAILURES =================================
|
||||
________________________ TestClass.test_equals[1-2] ________________________
|
||||
|
||||
self = <test_parametrize.TestClass instance at 0x216e8c0>, a = 1, b = 2
|
||||
|
||||
def test_equals(self, a, b):
|
||||
> assert a == b
|
||||
E assert 1 == 2
|
||||
|
||||
test_parametrize.py:18: AssertionError
|
||||
|
||||
Indirect parametrization with multiple fixtures
|
||||
--------------------------------------------------------------
|
||||
|
||||
Here is a stripped down real-life example of using parametrized
|
||||
testing for testing serialization of objects between different python
|
||||
interpreters. We define a ``test_basic_objects`` function which
|
||||
is to be run with different sets of arguments for its three arguments:
|
||||
|
||||
* ``python1``: first python interpreter, run to pickle-dump an object to a file
|
||||
* ``python2``: second interpreter, run to pickle-load an object from a file
|
||||
* ``obj``: object to be dumped/loaded
|
||||
|
||||
.. literalinclude:: multipython.py
|
||||
|
||||
Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
|
||||
|
||||
. $ py.test -rs -q multipython.py
|
||||
............sss............sss............sss............ssssssssssssssssss
|
||||
========================= short test summary info ==========================
|
||||
SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found
|
||||
16
doc/en/example/py2py3/conftest.py
Normal file
@@ -0,0 +1,16 @@
|
||||
import sys
|
||||
import pytest
|
||||
|
||||
py3 = sys.version_info[0] >= 3
|
||||
|
||||
class DummyCollector(pytest.collect.File):
|
||||
def collect(self):
|
||||
return []
|
||||
|
||||
def pytest_pycollect_makemodule(path, parent):
|
||||
bn = path.basename
|
||||
if "py3" in bn and not py3 or ("py2" in bn and py3):
|
||||
return DummyCollector(path, parent=parent)
|
||||
|
||||
|
||||
|
||||
7
doc/en/example/py2py3/test_py2.py
Normal file
@@ -0,0 +1,7 @@
|
||||
|
||||
def test_exception_syntax():
|
||||
try:
|
||||
0/0
|
||||
except ZeroDivisionError, e:
|
||||
pass
|
||||
|
||||
7
doc/en/example/py2py3/test_py3.py
Normal file
@@ -0,0 +1,7 @@
|
||||
|
||||
def test_exception_syntax():
|
||||
try:
|
||||
0/0
|
||||
except ZeroDivisionError as e:
|
||||
pass
|
||||
|
||||
@@ -1,7 +1,7 @@
|
||||
Changing standard (Python) test discovery
|
||||
===============================================
|
||||
|
||||
changing directory recursion
|
||||
Changing directory recursion
|
||||
-----------------------------------------------------
|
||||
|
||||
You can set the :confval:`norecursedirs` option in an ini-file, for example your ``setup.cfg`` in the project root directory::
|
||||
@@ -14,7 +14,7 @@ This would tell py.test to not recurse into typical subversion or sphinx-build d
|
||||
|
||||
.. _`change naming conventions`:
|
||||
|
||||
change naming conventions
|
||||
Changing naming conventions
|
||||
-----------------------------------------------------
|
||||
|
||||
You can configure different naming conventions by setting
|
||||
@@ -43,8 +43,8 @@ then the test collection looks like this::
|
||||
|
||||
$ py.test --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 2 items
|
||||
<Module 'check_myapp.py'>
|
||||
<Class 'CheckMyApp'>
|
||||
<Instance '()'>
|
||||
@@ -53,7 +53,7 @@ then the test collection looks like this::
|
||||
|
||||
============================= in 0.01 seconds =============================
|
||||
|
||||
interpret cmdline arguments as Python packages
|
||||
Interpreting cmdline arguments as Python packages
|
||||
-----------------------------------------------------
|
||||
|
||||
You can use the ``--pyargs`` option to make py.test try
|
||||
@@ -75,15 +75,15 @@ Now a simple invocation of ``py.test NAME`` will check
|
||||
if NAME exists as an importable package/module and otherwise
|
||||
treat it as a filesystem path.
|
||||
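For instance, with a hypothetical installed package ``mypkg``::

    py.test --pyargs mypkg.test_module

py.test would import ``mypkg.test_module``, derive its filesystem location and
collect from there; a name that cannot be imported is simply treated as a path.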
|
||||
finding out what is collected
|
||||
Finding out what is collected
|
||||
-----------------------------------------------
|
||||
|
||||
You can always peek at the collection tree without running tests like this::
|
||||
|
||||
. $ py.test --collectonly pythoncollection.py
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 3 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 3 items
|
||||
<Module 'pythoncollection.py'>
|
||||
<Function 'test_function'>
|
||||
<Class 'TestClass'>
|
||||
@@ -91,4 +91,56 @@ You can always peek at the collection tree without running tests like this::
|
||||
<Function 'test_method'>
|
||||
<Function 'test_anothermethod'>
|
||||
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
customizing test collection to find all .py files
|
||||
---------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
You can easily instruct py.test to discover tests from every python file::
|
||||
|
||||
|
||||
# content of pytest.ini
|
||||
[pytest]
|
||||
python_files = *.py
|
||||
|
||||
However, many projects will have a ``setup.py`` which they don't want to be imported. Moreover, there may be files only importable by a specific Python version.
|
||||
For such cases you can dynamically define files to be ignored by listing
|
||||
them in a ``conftest.py`` file::
|
||||
|
||||
# content of conftest.py
|
||||
import sys
|
||||
|
||||
collect_ignore = ["setup.py"]
|
||||
if sys.version_info[0] > 2:
|
||||
collect_ignore.append("pkg/module_py2.py")
|
||||
|
||||
And then if you have a module file like this::
|
||||
|
||||
# content of pkg/module_py2.py
|
||||
def test_only_on_python2():
|
||||
try:
|
||||
assert 0
|
||||
except Exception, e:
|
||||
pass
|
||||
|
||||
and a setup.py dummy file like this::
|
||||
|
||||
# content of setup.py
|
||||
0/0 # will raise exception if imported
|
||||
|
||||
then a pytest run on python2 will find the one test when run with a python2
|
||||
interpreter and will leave out the setup.py file::
|
||||
|
||||
$ py.test --collectonly
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 1 items
|
||||
<Module 'pkg/module_py2.py'>
|
||||
<Function 'test_only_on_python2'>
|
||||
|
||||
============================= in 0.01 seconds =============================
|
||||
|
||||
If you run with a Python3 interpreter the module added through the conftest.py file will not be considered for test collection.
|
||||
|
||||
@@ -13,8 +13,8 @@ get on the terminal - we are working on that):
|
||||
|
||||
assertion $ py.test failure_demo.py
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 39 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 39 items
|
||||
|
||||
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
|
||||
|
||||
@@ -30,7 +30,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:15: AssertionError
|
||||
_________________________ TestFailing.test_simple __________________________
|
||||
|
||||
self = <failure_demo.TestFailing object at 0x14b9890>
|
||||
self = <failure_demo.TestFailing object at 0x11f1e50>
|
||||
|
||||
def test_simple(self):
|
||||
def f():
|
||||
@@ -40,13 +40,13 @@ get on the terminal - we are working on that):
|
||||
|
||||
> assert f() == g()
|
||||
E assert 42 == 43
|
||||
E + where 42 = <function f at 0x14a5e60>()
|
||||
E + and 43 = <function g at 0x14bc1b8>()
|
||||
E + where 42 = <function f at 0x121a320>()
|
||||
E + and 43 = <function g at 0x121a398>()
|
||||
|
||||
failure_demo.py:28: AssertionError
|
||||
____________________ TestFailing.test_simple_multiline _____________________
|
||||
|
||||
self = <failure_demo.TestFailing object at 0x14b9b50>
|
||||
self = <failure_demo.TestFailing object at 0x118e8d0>
|
||||
|
||||
def test_simple_multiline(self):
|
||||
otherfunc_multi(
|
||||
@@ -59,26 +59,26 @@ get on the terminal - we are working on that):
|
||||
a = 42, b = 54
|
||||
|
||||
def otherfunc_multi(a,b):
|
||||
assert (a ==
|
||||
> b)
|
||||
> assert (a ==
|
||||
b)
|
||||
E assert 42 == 54
|
||||
|
||||
failure_demo.py:12: AssertionError
|
||||
failure_demo.py:11: AssertionError
|
||||
___________________________ TestFailing.test_not ___________________________
|
||||
|
||||
self = <failure_demo.TestFailing object at 0x14b9790>
|
||||
self = <failure_demo.TestFailing object at 0x1186310>
|
||||
|
||||
def test_not(self):
|
||||
def f():
|
||||
return 42
|
||||
> assert not f()
|
||||
E assert not 42
|
||||
E + where 42 = <function f at 0x14bc398>()
|
||||
E + where 42 = <function f at 0x121a668>()
|
||||
|
||||
failure_demo.py:38: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_text _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x14aa810>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x11f2d90>
|
||||
|
||||
def test_eq_text(self):
|
||||
> assert 'spam' == 'eggs'
|
||||
@@ -89,7 +89,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:42: AssertionError
|
||||
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1576190>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x11e6b50>
|
||||
|
||||
def test_eq_similar_text(self):
|
||||
> assert 'foo 1 bar' == 'foo 2 bar'
|
||||
@@ -102,7 +102,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:45: AssertionError
|
||||
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x14a7450>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1351ad0>
|
||||
|
||||
def test_eq_multiline_text(self):
|
||||
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
|
||||
@@ -115,7 +115,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:48: AssertionError
|
||||
______________ TestSpecialisedExplanations.test_eq_long_text _______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x14b9350>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x118ee10>
|
||||
|
||||
def test_eq_long_text(self):
|
||||
a = '1'*100 + 'a' + '2'*100
|
||||
@@ -132,7 +132,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:53: AssertionError
|
||||
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x15764d0>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x11f5d50>
|
||||
|
||||
def test_eq_long_text_multiline(self):
|
||||
a = '1\n'*100 + 'a' + '2\n'*100
|
||||
@@ -156,7 +156,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:58: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_list _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1576350>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x118bc50>
|
||||
|
||||
def test_eq_list(self):
|
||||
> assert [0, 1, 2] == [0, 1, 3]
|
||||
@@ -166,7 +166,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:61: AssertionError
|
||||
______________ TestSpecialisedExplanations.test_eq_list_long _______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1576f10>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1186fd0>
|
||||
|
||||
def test_eq_list_long(self):
|
||||
a = [0]*100 + [1] + [3]*100
|
||||
@@ -178,7 +178,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:66: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_dict _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1576390>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x11f1d10>
|
||||
|
||||
def test_eq_dict(self):
|
||||
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
|
||||
@@ -191,7 +191,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:69: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_eq_set __________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x14bd790>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x118b290>
|
||||
|
||||
def test_eq_set(self):
|
||||
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
|
||||
@@ -207,7 +207,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:72: AssertionError
|
||||
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x157a7d0>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1351d90>
|
||||
|
||||
def test_eq_longer_list(self):
|
||||
> assert [1,2] == [1,2,3]
|
||||
@@ -217,7 +217,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:75: AssertionError
|
||||
_________________ TestSpecialisedExplanations.test_in_list _________________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x157ab50>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x11f5fd0>
|
||||
|
||||
def test_in_list(self):
|
||||
> assert 1 in [0, 2, 3, 4, 5]
|
||||
@@ -226,7 +226,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:78: AssertionError
|
||||
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x157a090>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x118ba10>
|
||||
|
||||
def test_not_in_text_multiline(self):
|
||||
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
|
||||
@@ -244,7 +244,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:82: AssertionError
|
||||
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x14aaa50>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1351c90>
|
||||
|
||||
def test_not_in_text_single(self):
|
||||
text = 'single foo line'
|
||||
@@ -257,7 +257,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:86: AssertionError
|
||||
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x157ab90>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x11f10d0>
|
||||
|
||||
def test_not_in_text_single_long(self):
|
||||
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
|
||||
@@ -270,7 +270,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:90: AssertionError
|
||||
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
|
||||
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1576ed0>
|
||||
self = <failure_demo.TestSpecialisedExplanations object at 0x1351d50>
|
||||
|
||||
def test_not_in_text_single_long_term(self):
|
||||
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
|
||||
@@ -289,7 +289,7 @@ get on the terminal - we are working on that):
|
||||
i = Foo()
|
||||
> assert i.b == 2
|
||||
E assert 1 == 2
|
||||
E + where 1 = <failure_demo.Foo object at 0x157a910>.b
|
||||
E + where 1 = <failure_demo.Foo object at 0x118b3d0>.b
|
||||
|
||||
failure_demo.py:101: AssertionError
|
||||
_________________________ test_attribute_instance __________________________
|
||||
@@ -299,8 +299,8 @@ get on the terminal - we are working on that):
|
||||
b = 1
|
||||
> assert Foo().b == 2
|
||||
E assert 1 == 2
|
||||
E + where 1 = <failure_demo.Foo object at 0x1584610>.b
|
||||
E + where <failure_demo.Foo object at 0x1584610> = <class 'failure_demo.Foo'>()
|
||||
E + where 1 = <failure_demo.Foo object at 0x11f5dd0>.b
|
||||
E + where <failure_demo.Foo object at 0x11f5dd0> = <class 'failure_demo.Foo'>()
|
||||
|
||||
failure_demo.py:107: AssertionError
|
||||
__________________________ test_attribute_failure __________________________
|
||||
@@ -316,7 +316,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:116:
|
||||
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
|
||||
|
||||
self = <failure_demo.Foo object at 0x157a3d0>
|
||||
self = <failure_demo.Foo object at 0x1186a10>
|
||||
|
||||
def _get_b(self):
|
||||
> raise Exception('Failed to get attrib')
|
||||
@@ -332,15 +332,15 @@ get on the terminal - we are working on that):
|
||||
b = 2
|
||||
> assert Foo().b == Bar().b
|
||||
E assert 1 == 2
|
||||
E + where 1 = <failure_demo.Foo object at 0x157a1d0>.b
|
||||
E + where <failure_demo.Foo object at 0x157a1d0> = <class 'failure_demo.Foo'>()
|
||||
E + and 2 = <failure_demo.Bar object at 0x157a9d0>.b
|
||||
E + where <failure_demo.Bar object at 0x157a9d0> = <class 'failure_demo.Bar'>()
|
||||
E + where 1 = <failure_demo.Foo object at 0x118eb50>.b
|
||||
E + where <failure_demo.Foo object at 0x118eb50> = <class 'failure_demo.Foo'>()
|
||||
E + and 2 = <failure_demo.Bar object at 0x118e090>.b
|
||||
E + where <failure_demo.Bar object at 0x118e090> = <class 'failure_demo.Bar'>()
|
||||
|
||||
failure_demo.py:124: AssertionError
|
||||
__________________________ TestRaises.test_raises __________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x157d7e8>
|
||||
self = <failure_demo.TestRaises instance at 0x117fe60>
|
||||
|
||||
def test_raises(self):
|
||||
s = 'qwe'
|
||||
@@ -352,10 +352,10 @@ get on the terminal - we are working on that):
|
||||
> int(s)
|
||||
E ValueError: invalid literal for int() with base 10: 'qwe'
|
||||
|
||||
<0-codegen /home/hpk/p/pytest/_pytest/python.py:831>:1: ValueError
|
||||
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:851>:1: ValueError
|
||||
______________________ TestRaises.test_raises_doesnt _______________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x158ae60>
|
||||
self = <failure_demo.TestRaises instance at 0x1204170>
|
||||
|
||||
def test_raises_doesnt(self):
|
||||
> raises(IOError, "int('3')")
|
||||
@@ -364,7 +364,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:136: Failed
|
||||
__________________________ TestRaises.test_raise ___________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x158bb90>
|
||||
self = <failure_demo.TestRaises instance at 0x1214710>
|
||||
|
||||
def test_raise(self):
|
||||
> raise ValueError("demo error")
|
||||
@@ -373,7 +373,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:139: ValueError
|
||||
________________________ TestRaises.test_tupleerror ________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x157cd40>
|
||||
self = <failure_demo.TestRaises instance at 0x12055f0>
|
||||
|
||||
def test_tupleerror(self):
|
||||
> a,b = [1]
|
||||
@@ -382,7 +382,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:142: ValueError
|
||||
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x157d488>
|
||||
self = <failure_demo.TestRaises instance at 0x11edef0>
|
||||
|
||||
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
|
||||
l = [1,2,3]
|
||||
@@ -395,11 +395,11 @@ get on the terminal - we are working on that):
|
||||
l is [1, 2, 3]
|
||||
________________________ TestRaises.test_some_error ________________________
|
||||
|
||||
self = <failure_demo.TestRaises instance at 0x158a7e8>
|
||||
self = <failure_demo.TestRaises instance at 0x11e3fc8>
|
||||
|
||||
def test_some_error(self):
|
||||
> if namenotexi:
|
||||
E NameError: global name 'namenotexi' is not defined
|
||||
E NameError: global name 'namenotexi' is not defined
|
||||
|
||||
failure_demo.py:150: NameError
|
||||
____________________ test_dynamic_compile_shows_nicely _____________________
|
||||
@@ -420,10 +420,10 @@ get on the terminal - we are working on that):
|
||||
> assert 1 == 0
|
||||
E assert 1 == 0
|
||||
|
||||
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
|
||||
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
|
||||
____________________ TestMoreErrors.test_complex_error _____________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x158f8c0>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x11e19e0>
|
||||
|
||||
def test_complex_error(self):
|
||||
def f():
|
||||
@@ -452,7 +452,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:5: AssertionError
|
||||
___________________ TestMoreErrors.test_z1_unpack_error ____________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x158c998>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x11e1c68>
|
||||
|
||||
def test_z1_unpack_error(self):
|
||||
l = []
|
||||
@@ -462,7 +462,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:179: ValueError
|
||||
____________________ TestMoreErrors.test_z2_type_error _____________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x15854d0>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x12048c0>
|
||||
|
||||
def test_z2_type_error(self):
|
||||
l = 3
|
||||
@@ -472,20 +472,19 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:183: TypeError
|
||||
______________________ TestMoreErrors.test_startswith ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x14b65a8>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x11e9560>
|
||||
|
||||
def test_startswith(self):
|
||||
s = "123"
|
||||
g = "456"
|
||||
> assert s.startswith(g)
|
||||
E assert False
|
||||
E + where False = <built-in method startswith of str object at 0x14902a0>('456')
|
||||
E + where <built-in method startswith of str object at 0x14902a0> = '123'.startswith
|
||||
E assert <built-in method startswith of str object at 0x11f99b8>('456')
|
||||
E + where <built-in method startswith of str object at 0x11f99b8> = '123'.startswith
|
||||
|
||||
failure_demo.py:188: AssertionError
|
||||
__________________ TestMoreErrors.test_startswith_nested ___________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x158d518>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1196bd8>
|
||||
|
||||
def test_startswith_nested(self):
|
||||
def f():
|
||||
@@ -493,39 +492,36 @@ get on the terminal - we are working on that):
|
||||
def g():
|
||||
return "456"
|
||||
> assert f().startswith(g())
|
||||
E assert False
|
||||
E + where False = <built-in method startswith of str object at 0x14902a0>('456')
|
||||
E + where <built-in method startswith of str object at 0x14902a0> = '123'.startswith
|
||||
E + where '123' = <function f at 0x15806e0>()
|
||||
E + and '456' = <function g at 0x1580aa0>()
|
||||
E assert <built-in method startswith of str object at 0x11f99b8>('456')
|
||||
E + where <built-in method startswith of str object at 0x11f99b8> = '123'.startswith
|
||||
E + where '123' = <function f at 0x120e938>()
|
||||
E + and '456' = <function g at 0x120e9b0>()
|
||||
|
||||
failure_demo.py:195: AssertionError
|
||||
_____________________ TestMoreErrors.test_global_func ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1593440>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x12147e8>
|
||||
|
||||
def test_global_func(self):
|
||||
> assert isinstance(globf(42), float)
|
||||
E assert False
|
||||
E + where False = isinstance(43, float)
|
||||
E + where 43 = globf(42)
|
||||
E assert isinstance(43, float)
|
||||
E + where 43 = globf(42)
|
||||
|
||||
failure_demo.py:198: AssertionError
|
||||
_______________________ TestMoreErrors.test_instance _______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x15952d8>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x11e2e60>
|
||||
|
||||
def test_instance(self):
|
||||
self.x = 6*7
|
||||
> assert self.x != 42
|
||||
E assert 42 != 42
|
||||
E + where 42 = 42
|
||||
E + where 42 = <failure_demo.TestMoreErrors instance at 0x15952d8>.x
|
||||
E + where 42 = <failure_demo.TestMoreErrors instance at 0x11e2e60>.x
|
||||
|
||||
failure_demo.py:202: AssertionError
|
||||
_______________________ TestMoreErrors.test_compare ________________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1593758>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1216170>
|
||||
|
||||
def test_compare(self):
|
||||
> assert globf(10) < 5
|
||||
@@ -535,7 +531,7 @@ get on the terminal - we are working on that):
|
||||
failure_demo.py:205: AssertionError
|
||||
_____________________ TestMoreErrors.test_try_finally ______________________
|
||||
|
||||
self = <failure_demo.TestMoreErrors instance at 0x157cd88>
|
||||
self = <failure_demo.TestMoreErrors instance at 0x1205050>
|
||||
|
||||
def test_try_finally(self):
|
||||
x = 1
|
||||
@@ -544,4 +540,4 @@ get on the terminal - we are working on that):
|
||||
E assert 1 == 0
|
||||
|
||||
failure_demo.py:210: AssertionError
|
||||
======================== 39 failed in 0.23 seconds =========================
|
||||
======================== 39 failed in 0.21 seconds =========================
|
||||
@@ -1,10 +1,10 @@
|
||||
|
||||
.. highlightlang:: python
|
||||
|
||||
basic patterns and examples
|
||||
Basic patterns and examples
|
||||
==========================================================
|
||||
|
||||
pass different values to a test function, depending on command line options
|
||||
Pass different values to a test function, depending on command line options
|
||||
----------------------------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
@@ -22,20 +22,22 @@ Here is a basic pattern how to achieve this::
|
||||
|
||||
|
||||
For this to work we need to add a command line option and
|
||||
provide the ``cmdopt`` through a :ref:`function argument <funcarg>` factory::
|
||||
provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`::
|
||||
|
||||
# content of conftest.py
|
||||
import pytest
|
||||
|
||||
def pytest_addoption(parser):
|
||||
parser.addoption("--cmdopt", action="store", default="type1",
|
||||
help="my option: type1 or type2")
|
||||
|
||||
def pytest_funcarg__cmdopt(request):
|
||||
@pytest.fixture
|
||||
def cmdopt(request):
|
||||
return request.config.option.cmdopt
|
||||
|
||||
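(The ``test_sample.py`` module driven by this option is elided by the diff; reconstructed
from the failure output below, it looks roughly like)::

    # content of test_sample.py  (reconstructed sketch)
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print("first")
        elif cmdopt == "type2":
            print("second")
        assert 0   # to see what was printed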
Let's run this without supplying our new command line option::
|
||||
Let's run this without supplying our new option::
|
||||
|
||||
$ py.test -q test_sample.py
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
================================= FAILURES =================================
|
||||
_______________________________ test_answer ________________________________
|
||||
@@ -53,12 +55,10 @@ Let's run this without supplying our new command line option::
|
||||
test_sample.py:6: AssertionError
|
||||
----------------------------- Captured stdout ------------------------------
|
||||
first
|
||||
1 failed in 0.03 seconds
|
||||
|
||||
And now with supplying a command line option::
|
||||
|
||||
$ py.test -q --cmdopt=type2
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
================================= FAILURES =================================
|
||||
_______________________________ test_answer ________________________________
|
||||
@@ -76,16 +76,13 @@ And now with supplying a command line option::
|
||||
test_sample.py:6: AssertionError
|
||||
----------------------------- Captured stdout ------------------------------
|
||||
second
|
||||
1 failed in 0.02 seconds
|
||||
|
||||
Ok, this completes the basic pattern. However, one often rather
|
||||
wants to process command line options outside of the test and
|
||||
rather pass in different or more complex objects. See the
|
||||
next example or refer to :ref:`mysetup` for more information
|
||||
on real-life examples.
|
||||
You can see that the command line option arrived in our test. This
|
||||
completes the basic pattern. However, one often rather wants to process
|
||||
command line options outside of the test and rather pass in different or
|
||||
more complex objects.
|
||||
|
||||
|
||||
dynamically adding command line options
|
||||
Dynamically adding command line options
|
||||
--------------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
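The conftest.py referenced by the next hunk is elided; judging from the ``gw0``-``gw3``
worker lines in the output below it dynamically adds pytest-xdist options, roughly like
this (sketch, assumes pytest-xdist is installed)::

    # content of conftest.py  (sketch)
    import sys

    def pytest_cmdline_preparse(args):
        if "xdist" in sys.modules:   # pytest-xdist plugin available?
            import multiprocessing
            num = max(multiprocessing.cpu_count() // 2, 1)
            args[:] = ["-n", str(num)] + args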
@@ -109,17 +106,14 @@ directory with the above conftest.py::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
gw0 I / gw1 I / gw2 I / gw3 I
|
||||
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 0 items
|
||||
|
||||
scheduling tests via LoadScheduling
|
||||
|
||||
============================= in 0.52 seconds =============================
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
.. _`excontrolskip`:
|
||||
|
||||
control skipping of tests according to command line option
|
||||
Control skipping of tests according to command line option
|
||||
--------------------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
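The conftest.py and test module for this example are elided by the hunk; based on the
skip message in the output below they look roughly like this (sketch)::

    # content of conftest.py  (sketch)
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")

    def pytest_runtest_setup(item):
        if "slow" in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")

    # content of test_module.py  (sketch)
    import pytest

    def test_func_fast():
        pass

    @pytest.mark.slow
    def test_func_slow():
        pass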
@@ -156,12 +150,12 @@ and when running it will see a skipped "slow" test::
|
||||
|
||||
$ py.test -rs # "-rs" means report details on the little 's'
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 2 items
|
||||
|
||||
test_module.py .s
|
||||
========================= short test summary info ==========================
|
||||
SKIP [1] /tmp/doc-exec-42/conftest.py:9: need --runslow option to run
|
||||
SKIP [1] /tmp/doc-exec-104/conftest.py:9: need --runslow option to run
|
||||
|
||||
=================== 1 passed, 1 skipped in 0.01 seconds ====================
|
||||
|
||||
@@ -169,14 +163,14 @@ Or run it including the ``slow`` marked test::
|
||||
|
||||
$ py.test --runslow
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 2 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 2 items
|
||||
|
||||
test_module.py ..
|
||||
|
||||
========================= 2 passed in 0.01 seconds =========================
|
||||
|
||||
writing well integrated assertion helpers
|
||||
Writing well integrated assertion helpers
|
||||
--------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
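The helper and its test are elided by the hunk; reconstructed from the
``Failed: not configured: 42`` output below, they are roughly::

    # content of test_checkconfig.py  (reconstructed sketch)
    import pytest

    def checkconfig(x):
        __tracebackhide__ = True   # hide this frame from the traceback
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % x)

    def test_something():
        checkconfig(42)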
@@ -203,7 +197,6 @@ unless the ``--fulltrace`` command line option is specified.
|
||||
Let's run our little function::
|
||||
|
||||
$ py.test -q test_checkconfig.py
|
||||
collecting ... collected 1 items
|
||||
F
|
||||
================================= FAILURES =================================
|
||||
______________________________ test_something ______________________________
|
||||
@@ -213,7 +206,6 @@ Let's run our little function::
|
||||
E Failed: not configured: 42
|
||||
|
||||
test_checkconfig.py:8: Failed
|
||||
1 failed in 0.02 seconds
|
||||
|
||||
Detect if running from within a py.test run
|
||||
--------------------------------------------------------------
|
||||
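The conftest.py that sets the flag is elided by the next hunk; it would typically
look like this (sketch)::

    # content of conftest.py  (sketch)
    import sys

    def pytest_configure(config):
        sys._called_from_test = True

    def pytest_unconfigure(config):
        del sys._called_from_test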
@@ -240,9 +232,9 @@ and then check for the ``sys._called_from_test`` flag::
|
||||
# called from within a test run
|
||||
else:
|
||||
# called "normally"
|
||||
|
||||
|
||||
accordingly in your application. It's also a good idea
|
||||
to rather use your own application module rather than ``sys``
|
||||
to use your own application module rather than ``sys``
|
||||
for handling the flag.
|
||||
|
||||
Adding info to test report header
|
||||
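The hook implementation is elided here; given the ``project deps: mylib-1.1`` line in
the output below it is essentially::

    # content of conftest.py  (sketch)
    def pytest_report_header(config):
        return "project deps: mylib-1.1"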
@@ -261,9 +253,9 @@ which will add the string to the test header accordingly::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
project deps: mylib-1.1
|
||||
collecting ... collected 0 items
|
||||
collected 0 items
|
||||
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
@@ -275,7 +267,7 @@ information on e.g. the value of ``config.option.verbose`` so that
|
||||
you present more information appropriately::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
|
||||
def pytest_report_header(config):
|
||||
if config.option.verbose > 0:
|
||||
return ["info1: did you know that ...", "did you?"]
|
||||
@@ -284,7 +276,7 @@ which will add info only when run with "--v"::
|
||||
|
||||
$ py.test -v
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3 -- /home/hpk/venv/0/bin/python
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
|
||||
info1: did you know that ...
|
||||
did you?
|
||||
collecting ... collected 0 items
|
||||
@@ -295,7 +287,119 @@ and nothing when run plainly::
|
||||
|
||||
$ py.test
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.6.6 -- pytest-2.0.3
|
||||
collecting ... collected 0 items
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 0 items
|
||||
|
||||
============================= in 0.00 seconds =============================
|
||||
|
||||
profiling test duration
|
||||
--------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
.. versionadded:: 2.2
|
||||
|
||||
If you have a slow running large test suite you might want to find
|
||||
out which tests are the slowest. Let's make an artificial test suite::
|
||||
|
||||
# content of test_some_are_slow.py
|
||||
|
||||
import time
|
||||
|
||||
def test_funcfast():
|
||||
pass
|
||||
|
||||
def test_funcslow1():
|
||||
time.sleep(0.1)
|
||||
|
||||
def test_funcslow2():
|
||||
time.sleep(0.2)
|
||||
|
||||
Now we can profile which test functions execute the slowest::
|
||||
|
||||
$ py.test --durations=3
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 3 items
|
||||
|
||||
test_some_are_slow.py ...
|
||||
|
||||
========================= slowest 3 test durations =========================
|
||||
0.20s call test_some_are_slow.py::test_funcslow2
|
||||
0.10s call test_some_are_slow.py::test_funcslow1
|
||||
0.00s call test_some_are_slow.py::test_funcfast
|
||||
========================= 3 passed in 0.31 seconds =========================
|
||||
|
||||
incremental testing - test steps
|
||||
---------------------------------------------------
|
||||
|
||||
.. regendoc:wipe
|
||||
|
||||
Sometimes you may have a testing situation which consists of a series
|
||||
of test steps. If one step fails it makes no sense to execute further
|
||||
steps as they are all expected to fail anyway and their tracebacks
|
||||
add no insight. Here is a simple ``conftest.py`` file which introduces
|
||||
an ``incremental`` marker which is to be used on classes::
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
import pytest
|
||||
|
||||
def pytest_runtest_makereport(item, call):
|
||||
if "incremental" in item.keywords:
|
||||
if call.excinfo is not None:
|
||||
parent = item.parent
|
||||
parent._previousfailed = item
|
||||
|
||||
def pytest_runtest_setup(item):
|
||||
if "incremental" in item.keywords:
|
||||
previousfailed = getattr(item.parent, "_previousfailed", None)
|
||||
if previousfailed is not None:
|
||||
pytest.xfail("previous test failed (%s)" %previousfailed.name)
|
||||
|
||||
These two hook implementations work together to abort incremental-marked
|
||||
tests in a class. Here is a test module example::
|
||||
|
||||
# content of test_step.py
|
||||
|
||||
import pytest
|
||||
|
||||
@pytest.mark.incremental
|
||||
class TestUserHandling:
|
||||
def test_login(self):
|
||||
pass
|
||||
def test_modification(self):
|
||||
assert 0
|
||||
def test_deletion(self):
|
||||
pass
|
||||
|
||||
def test_normal():
|
||||
pass
|
||||
|
||||
If we run this::
|
||||
|
||||
$ py.test -rx
|
||||
=========================== test session starts ============================
|
||||
platform linux2 -- Python 2.7.3 -- pytest-2.3.2
|
||||
collected 4 items
|
||||
|
||||
test_step.py .Fx.
|
||||
|
||||
================================= FAILURES =================================
|
||||
____________________ TestUserHandling.test_modification ____________________
|
||||
|
||||
self = <test_step.TestUserHandling instance at 0x269efc8>
|
||||
|
||||
def test_modification(self):
|
||||
> assert 0
|
||||
E assert 0
|
||||
|
||||
test_step.py:9: AssertionError
|
||||
========================= short test summary info ==========================
|
||||
XFAIL test_step.py::TestUserHandling::()::test_deletion
|
||||
reason: previous test failed (test_modification)
|
||||
============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============
|
||||
|
||||
We'll see that ``test_deletion`` was not executed because ``test_modification``
|
||||
failed. It is reported as an "expected failure".
|
||||
|
||||
@@ -3,12 +3,84 @@ Some Issues and Questions
|
||||
|
||||
.. note::
|
||||
|
||||
If you don't find an answer here, checkout the :ref:`contact channels`
|
||||
to get help.
|
||||
If you don't find an answer here, you may checkout
|
||||
`pytest Q&A at Stackoverflow <http://stackoverflow.com/search?q=pytest>`_
|
||||
or other :ref:`contact channels` to get help.
|
||||
|
||||
On naming, nosetests, licensing and magic
------------------------------------------------

How does py.test relate to nose and unittest?
+++++++++++++++++++++++++++++++++++++++++++++++++

py.test and nose_ share basic philosophy when it comes
to running and writing Python tests. In fact, you can run many tests
written for nose with py.test. nose_ was originally created
as a clone of ``py.test`` when py.test was in the ``0.8`` release
cycle. Note that starting with pytest-2.0 support for running unittest
test suites is majorly improved.
|
||||
|
||||
How does py.test relate to twisted's trial?
++++++++++++++++++++++++++++++++++++++++++++++

For some time py.test has had builtin support for tests
written using trial. It does not itself start a reactor, however,
and does not handle Deferreds returned from a test in pytest style.
If you are using trial's unittest.TestCase chances are that you can
just run your tests even if you return Deferreds. In addition,
there also is a dedicated `pytest-twisted
<http://pypi.python.org/pypi/pytest-twisted>`_ plugin which allows you to
return Deferreds from pytest-style tests, making it possible to use
:ref:`fixtures` and other features.

How does py.test work with Django?
++++++++++++++++++++++++++++++++++++++++++++++

In 2012, some work is going into the `pytest-django plugin <http://pypi.python.org/pypi/pytest-django>`_. It substitutes the usage of Django's
``manage.py test`` and allows you to use all pytest features_, most of which
are not available from Django directly.
|
||||
|
||||
.. _features: features.html
|
||||
|
||||
|
||||
What's this "magic" with py.test? (historic notes)
|
||||
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
Around 2007 (version ``0.8``) some people thought that py.test
was using too much "magic". It had been part of the `pylib`_ which
contains a lot of unrelated python library code. Around 2010 there
was a major cleanup refactoring, which removed unused or deprecated code
and resulted in the new ``pytest`` PyPI package which strictly contains
only test-related code. This release also brought a complete pluginification
such that the core is around 300 lines of code and everything else is
implemented in plugins. Thus ``pytest`` today is a small, universally runnable
and customizable testing framework for Python. Note, however, that
``pytest`` uses metaprogramming techniques and reading its source is
thus likely not something for Python beginners.
|
||||
|
||||
A second "magic" issue was the assert statement debugging feature.
|
||||
Nowadays, py.test explicitely rewrites assert statements in test modules
|
||||
in order to provide more useful :ref:`assert feedback <assertfeedback>`.
|
||||
This completely avoids previous issues of confusing assertion-reporting.
|
||||
It also means, that you can use Python's ``-O`` optimization without loosing
|
||||
assertions in test modules.
|
||||
|
||||
py.test contains a second mostly obsolete assert debugging technique,
|
||||
invoked via ``--assert=reinterpret``, activated by default on
|
||||
Python-2.5: When an ``assert`` statement fails, py.test re-interprets
|
||||
the expression part to show intermediate values. This technique suffers
|
||||
from a caveat that the rewriting does not: If your expression has side
|
||||
effects (better to avoid them anyway!) the intermediate values may not
|
||||
be the same, confusing the reinterpreter and obfuscating the initial
|
||||
error (this is also explained at the command line if it happens).
|
||||
|
||||
You can also turn off all assertion interaction using the
|
||||
``--assertmode=off`` option.
|
||||
|
||||
.. _`py namespaces`: index.html
|
||||
.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/trunk/py/__init__.py
|
||||
|
||||
|
||||
Why a ``py.test`` instead of a ``pytest`` command?
|
||||
++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
@@ -18,55 +90,14 @@ utilities, all starting with ``py.<TAB>``, thus providing nice
|
||||
TAB-completion. If
|
||||
you install ``pip install pycmd`` you get these tools from a separate
|
||||
package. These days the command line tool could be called ``pytest``
|
||||
since many people have gotten used to the old name and there
|
||||
but since many people have gotten used to the old name and there
|
||||
is another tool named "pytest" we just decided to stick with
|
||||
``py.test``.
|
||||
``py.test`` for now.
|
||||
|
||||
How does py.test relate to nose and unittest?
|
||||
+++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
py.test and nose_ share basic philosophy when it comes
|
||||
to running and writing Python tests. In fact, you can run many tests
|
||||
written for nose with py.test. nose_ was originally created
|
||||
as a clone of ``py.test`` when py.test was in the ``0.8`` release
|
||||
cycle. Note that starting with pytest-2.0 support for running unittest
|
||||
test suites is majorly improved and you should be able to run
|
||||
many Django and Twisted test suites without modification.
|
||||
|
||||
.. _features: test/features.html
|
||||
|
||||
|
||||
What's this "magic" with py.test?
|
||||
++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
Around 2007 (version ``0.8``) some people claimed that py.test
|
||||
was using too much "magic". Partly this has been fixed by removing
|
||||
unused, deprecated or complicated code. It is today probably one
|
||||
of the smallest, most universally runnable and most
|
||||
customizable testing frameworks for Python. However,
|
||||
``py.test`` still uses many metaprogramming techniques and
|
||||
reading its source is thus likely not something for Python beginners.
|
||||
|
||||
A second "magic" issue arguably the assert statement debugging feature. When
|
||||
loading test modules py.test rewrites the source code of assert statements. When
|
||||
a rewritten assert statement fails, its error message has more information than
|
||||
the original. py.test also has a second assert debugging technique. When an
|
||||
``assert`` statement that was missed by the rewriter fails, py.test
|
||||
re-interprets the expression to show intermediate values if a test fails. This
|
||||
second technique suffers from caveat that the rewriting does not: If your
|
||||
expression has side effects (better to avoid them anyway!) the intermediate
|
||||
values may not be the same, confusing the reinterpreter and obfuscating the
|
||||
initial error (this is also explained at the command line if it happens).
|
||||
You can turn off all assertion debugging with ``py.test --assertmode=off``.
|
||||
|
||||
.. _`py namespaces`: index.html
|
||||
.. _`py/__init__.py`: http://bitbucket.org/hpk42/py-trunk/src/trunk/py/__init__.py
|
||||
|
||||
|
||||
function arguments, parametrized tests and setup
|
||||
Function arguments, parametrized tests and setup
|
||||
-------------------------------------------------------
|
||||
|
||||
.. _funcargs: test/funcargs.html
|
||||
.. _funcargs: funcargs.html
|
||||
|
||||
Is using funcarg- versus xUnit setup a style question?
|
||||
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
@@ -80,9 +111,9 @@ code (like e.g. the monkeypatch_, the tmpdir_ or capture_ funcargs)
|
||||
because the support code can register setup/teardown functions
|
||||
in a managed class/module/function scope.
|
||||
|
||||
.. _monkeypatch: test/plugin/monkeypatch.html
|
||||
.. _tmpdir: test/plugin/tmpdir.html
|
||||
.. _capture: test/plugin/capture.html
|
||||
.. _monkeypatch: monkeypatch.html
|
||||
.. _tmpdir: tmpdir.html
|
||||
.. _capture: capture.html
|
||||
|
||||
.. _`why pytest_pyfuncarg__ methods?`:
|
||||
|
||||
@@ -91,13 +122,18 @@ Why the ``pytest_funcarg__*`` name for funcarg factories?
|
||||
|
||||
We like `Convention over Configuration`_ and didn't see much point
|
||||
in allowing a more flexible or abstract mechanism. Moreover,
|
||||
is is nice to be able to search for ``pytest_funcarg__MYARG`` in
|
||||
a source code and safely find all factory functions for
|
||||
it is nice to be able to search for ``pytest_funcarg__MYARG`` in
|
||||
source code and safely find all factory functions for
|
||||
the ``MYARG`` function argument.
|
||||
|
||||
.. note::
|
||||
|
||||
With pytest-2.3 you can use the :ref:`@pytest.fixture` decorator
|
||||
to mark a function as a fixture function.
|
||||
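A minimal side-by-side sketch of the two spellings (names invented for illustration)::

    import pytest

    # the older convention: the prefix makes all factories easy to grep for
    def pytest_funcarg__myarg(request):
        return 42

    # the pytest-2.3 way: an explicitly decorated fixture function
    @pytest.fixture
    def otherarg(request):
        return 23

    def test_uses_both(myarg, otherarg):
        assert (myarg, otherarg) == (42, 23)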
|
||||
.. _`Convention over Configuration`: http://en.wikipedia.org/wiki/Convention_over_Configuration
|
||||
|
||||
Can I yield multiple values from a funcarg factory function?
|
||||
Can I yield multiple values from a fixture function function?
|
||||
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
|
||||
|
||||
There are two conceptual reasons why yielding from a factory function
|
||||
@@ -113,8 +149,12 @@ is not possible:
|
||||
policy - in real-world examples some combinations
|
||||
often should not run.
|
||||
|
||||
Use the `pytest_generate_tests`_ hook to solve both issues
|
||||
and implement the `parametrization scheme of your choice`_.
|
||||
However, with pytest-2.3 you can use the :ref:`@pytest.fixture` decorator
|
||||
and specify ``params`` so that all tests depending on the factory-created
|
||||
resource will run multiple times with different parameters.
|
||||
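A minimal sketch of such a parametrized fixture (names invented)::

    import pytest

    @pytest.fixture(params=[0, 1, 2])
    def arg(request):
        return request.param   # each test using "arg" runs once per value

    def test_three_runs(arg):
        assert arg in (0, 1, 2)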
|
||||
You can also use the `pytest_generate_tests`_ hook to
|
||||
implement the `parametrization scheme of your choice`_.
|
||||
|
||||
.. _`pytest_generate_tests`: test/funcargs.html#parametrizing-tests
|
||||
.. _`parametrization scheme of your choice`: http://tetamap.wordpress.com/2009/05/13/parametrizing-python-tests-generalized/
|
||||
Some files were not shown because too many files have changed in this diff.