From 0780f2573faf10b1ce52fd6d41bdaf44fb2fbb4b Mon Sep 17 00:00:00 2001 From: holger krekel Date: Thu, 3 Oct 2013 19:09:18 +0200 Subject: [PATCH] bump version to 2.4.2, regen docs --- CHANGELOG | 5 +- doc/en/announce/index.txt | 2 + doc/en/announce/release-2.4.2.txt | 32 ++++++++ doc/en/assert.txt | 5 +- doc/en/attic_fixtures.txt | 2 + doc/en/builtin.txt | 1 + doc/en/capture.txt | 4 +- doc/en/doctest.txt | 4 +- doc/en/example/markers.txt | 25 ++++--- doc/en/example/nonpython.txt | 12 +-- doc/en/example/parametrize.txt | 21 ++++-- doc/en/example/pythoncollection.txt | 6 +- doc/en/example/reportingdemo.txt | 110 ++++++++++++++-------------- doc/en/example/simple.txt | 59 ++++++++------- doc/en/example/special.txt | 9 ++- doc/en/fixture.txt | 49 ++++++------- doc/en/getting-started.txt | 13 ++-- doc/en/parametrize.txt | 33 +++++---- doc/en/skipping.txt | 8 +- doc/en/tmpdir.txt | 6 +- doc/en/unittest.txt | 7 +- doc/en/usage.txt | 2 +- 22 files changed, 230 insertions(+), 185 deletions(-) create mode 100644 doc/en/announce/release-2.4.2.txt diff --git a/CHANGELOG b/CHANGELOG index 561649b73..651c46657 100644 --- a/CHANGELOG +++ b/CHANGELOG @@ -8,9 +8,6 @@ Changes between 2.4.1 and 2.4.2 cause wrong matches because of an internal implementation quirk (don't ask) which is now properly implemented. fixes issue345. -- avoid "IOError: Bad Filedescriptor" on pytest shutdown by not closing - the internal dupped stdout (fix is slightly hand-wavy but works). - - avoid tmpdir fixture to create too long filenames especially when parametrization is used (issue354) @@ -25,7 +22,7 @@ Changes between 2.4.1 and 2.4.2 docs. -- remove attempt to "dup" stdout at startup. +- remove attempt to "dup" stdout at startup as it's icky. the normal capturing should catch enough possibilities of tests messing up standard FDs. diff --git a/doc/en/announce/index.txt b/doc/en/announce/index.txt index 06e1a5065..a7e78b601 100644 --- a/doc/en/announce/index.txt +++ b/doc/en/announce/index.txt @@ -5,6 +5,8 @@ Release announcements .. toctree:: :maxdepth: 2 + release-2.4.2 + release-2.4.1 release-2.4.0 release-2.3.5 release-2.3.4 diff --git a/doc/en/announce/release-2.4.2.txt b/doc/en/announce/release-2.4.2.txt new file mode 100644 index 000000000..cd35cb0f1 --- /dev/null +++ b/doc/en/announce/release-2.4.2.txt @@ -0,0 +1,32 @@ +pytest-2.4.2: colorama on windows, plugin/tmpdir fixes +=========================================================================== + +pytest-2.4.2 is another bug-fixing release: + +- fix "-k" matching of tests where "repr" and "attr" and other names would + cause wrong matches because of an internal implementation quirk + (don't ask) which is now properly implemented. fixes issue345. + +- avoid the tmpdir fixture creating overly long filenames, especially + when parametrization is used (issue354) + +- fix pytest-pep8 and pytest-flakes / pytest interactions + (collection name handling in the mark plugin assumed an item always + has a function, which is not true for those plugins.) + Thanks Andi Zeidler. + +- introduce node.get_marker/node.add_marker API for plugins + like pytest-pep8 and pytest-flakes to avoid the messy + details of the node.keywords pseudo-dicts. Adapted + docs. + +- remove attempt to "dup" stdout at startup as it's icky. + the normal capturing should catch enough possibilities + of tests messing up standard FDs.
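For plugin authors, here is a minimal sketch of how the new ``get_marker``/``add_marker`` API can be used from a ``conftest.py`` hook; the hook body and the marker names are illustrative only, not part of this release::

    # conftest.py -- illustrative sketch of the pytest-2.4.2 marker API
    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            # get_marker returns the marker info for a named marker (or None)
            # without reaching into the node.keywords pseudo-dict directly
            if item.get_marker("slow") is None:
                # add_marker attaches a marker programmatically;
                # "quick" is a made-up marker name for this example
                item.add_marker(pytest.mark.quick)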
+ +as usual, docs at http://pytest.org and upgrades via:: + + pip install -U pytest + +have fun, +holger krekel diff --git a/doc/en/assert.txt b/doc/en/assert.txt index 247522c45..d42800a75 100644 --- a/doc/en/assert.txt +++ b/doc/en/assert.txt @@ -26,7 +26,7 @@ you will see the return value of the function call:: $ py.test test_assert1.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items test_assert1.py F @@ -116,7 +116,7 @@ if you run this module:: $ py.test test_assert2.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items test_assert2.py F @@ -191,6 +191,7 @@ the conftest file:: E vals: 1 != 2 test_foocompare.py:8: AssertionError + 1 failed in 0.01 seconds .. _assert-details: .. _`assert introspection`: diff --git a/doc/en/attic_fixtures.txt b/doc/en/attic_fixtures.txt index df14030e4..8b796a637 100644 --- a/doc/en/attic_fixtures.txt +++ b/doc/en/attic_fixtures.txt @@ -129,6 +129,7 @@ Let's run this module without output-capturing:: E NameError: global name 'globresource' is not defined test_glob.py:5: NameError + 2 failed in 0.01 seconds The two tests see the same global ``globresource`` object. @@ -177,6 +178,7 @@ And then re-run our test module:: E NameError: global name 'globresource' is not defined test_glob.py:5: NameError + 2 failed in 0.01 seconds We are now running the two tests twice with two different global resource instances. Note that the tests are ordered such that only diff --git a/doc/en/builtin.txt b/doc/en/builtin.txt index f06882ce1..368d99fde 100644 --- a/doc/en/builtin.txt +++ b/doc/en/builtin.txt @@ -120,3 +120,4 @@ You can ask for available builtin or project-custom path object. + in 0.00 seconds diff --git a/doc/en/capture.txt b/doc/en/capture.txt index 639412857..003347469 100644 --- a/doc/en/capture.txt +++ b/doc/en/capture.txt @@ -64,7 +64,7 @@ of the failing function and hide the other one:: $ py.test =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items test_module.py .F @@ -78,7 +78,7 @@ of the failing function and hide the other one:: test_module.py:9: AssertionError ----------------------------- Captured stdout ------------------------------ - setting up + setting up ==================== 1 failed, 1 passed in 0.01 seconds ==================== Accessing captured output from a test function diff --git a/doc/en/doctest.txt b/doc/en/doctest.txt index dc1c125f5..0caf73747 100644 --- a/doc/en/doctest.txt +++ b/doc/en/doctest.txt @@ -44,12 +44,12 @@ then you can just invoke ``py.test`` without command line options:: $ py.test =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items mymodule.py . 
- ========================= 1 passed in 0.02 seconds ========================= + ========================= 1 passed in 0.01 seconds ========================= It is possible to use fixtures using the ``getfixture`` helper:: diff --git a/doc/en/example/markers.txt b/doc/en/example/markers.txt index 04b790cf7..55f9e6e3e 100644 --- a/doc/en/example/markers.txt +++ b/doc/en/example/markers.txt @@ -28,7 +28,7 @@ You can then restrict a test run to only run tests marked with ``webtest``:: $ py.test -v -m webtest =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 3 items test_server.py:3: test_send_http PASSED @@ -40,7 +40,7 @@ Or the inverse, running all tests except the webtest ones:: $ py.test -v -m "not webtest" =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 3 items test_server.py:6: test_something_quick PASSED @@ -61,7 +61,7 @@ select tests based on their names:: $ py.test -v -k http # running with the above defined example module =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 3 items test_server.py:3: test_send_http PASSED @@ -73,7 +73,7 @@ And you can also run all tests except the ones that match the keyword:: $ py.test -k "not send_http" -v =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 3 items test_server.py:6: test_something_quick PASSED @@ -86,7 +86,7 @@ Or to select "http" and "quick" tests:: $ py.test -k "http or quick" -v =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 3 items test_server.py:3: test_send_http PASSED @@ -255,7 +255,7 @@ the test needs:: $ py.test -E stage2 =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items test_someenv.py s @@ -266,7 +266,7 @@ and here is one that specifies exactly the environment needed:: $ py.test -E stage1 =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items test_someenv.py . @@ -331,6 +331,7 @@ Let's run this without capturing output and see what we get:: glob args=('class',) kwargs={'x': 2} glob args=('module',) kwargs={'x': 1} . 
+ 1 passed in 0.01 seconds marking platform specific tests with pytest -------------------------------------------------------------- @@ -383,12 +384,12 @@ then you will see two test skipped and two executed tests as expected:: $ py.test -rs # this option reports skip reasons =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 4 items test_plat.py s.s. ========================= short test summary info ========================== - SKIP [2] /tmp/doc-exec-273/conftest.py:12: cannot run on platform linux2 + SKIP [2] /tmp/doc-exec-598/conftest.py:12: cannot run on platform linux2 =================== 2 passed, 2 skipped in 0.01 seconds ==================== @@ -396,7 +397,7 @@ Note that if you specify a platform via the marker-command line option like this $ py.test -m linux2 =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 4 items test_plat.py . @@ -447,7 +448,7 @@ We can now use the ``-m option`` to select one set:: $ py.test -m interface --tb=short =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 4 items test_module.py FF @@ -468,7 +469,7 @@ or to select both "event" and "interface" tests:: $ py.test -m "interface or event" --tb=short =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 4 items test_module.py FFF diff --git a/doc/en/example/nonpython.txt b/doc/en/example/nonpython.txt index 632da4478..9094e5c4f 100644 --- a/doc/en/example/nonpython.txt +++ b/doc/en/example/nonpython.txt @@ -27,7 +27,7 @@ now execute the test specification:: nonpython $ py.test test_simple.yml =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items test_simple.yml .F @@ -37,7 +37,7 @@ now execute the test specification:: usecase execution failed spec failed: 'some': 'other' no further details known at this point. - ==================== 1 failed, 1 passed in 0.05 seconds ==================== + ==================== 1 failed, 1 passed in 0.03 seconds ==================== You get one dot for the passing ``sub1: sub1`` check and one failure. Obviously in the above ``conftest.py`` you'll want to implement a more @@ -56,7 +56,7 @@ consulted when reporting in ``verbose`` mode:: nonpython $ py.test -v =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 2 items test_simple.yml:1: usecase: ok PASSED @@ -67,17 +67,17 @@ consulted when reporting in ``verbose`` mode:: usecase execution failed spec failed: 'some': 'other' no further details known at this point. 
- ==================== 1 failed, 1 passed in 0.05 seconds ==================== + ==================== 1 failed, 1 passed in 0.03 seconds ==================== While developing your custom test collection and execution it's also interesting to just look at the collection tree:: nonpython $ py.test --collect-only =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items - ============================= in 0.05 seconds ============================= + ============================= in 0.03 seconds ============================= diff --git a/doc/en/example/parametrize.txt b/doc/en/example/parametrize.txt index f6a3cb263..66c398d8a 100644 --- a/doc/en/example/parametrize.txt +++ b/doc/en/example/parametrize.txt @@ -46,6 +46,7 @@ This means that we only run 2 tests if we do not pass ``--all``:: $ py.test -q test_compute.py .. + 2 passed in 0.01 seconds We run only two computations, so we see two dots. let's run the full monty:: @@ -62,6 +63,7 @@ let's run the full monty:: E assert 4 < 4 test_compute.py:3: AssertionError + 1 failed, 4 passed in 0.01 seconds As expected when running the full range of ``param1`` values we'll get an error on the last one. @@ -104,7 +106,7 @@ this is a fully self-contained example which you can run with:: $ py.test test_scenarios.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 4 items test_scenarios.py .... @@ -116,7 +118,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia $ py.test --collect-only test_scenarios.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 4 items @@ -180,7 +182,7 @@ Let's first see how it looks like at collection time:: $ py.test test_backends.py --collect-only =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items @@ -195,7 +197,7 @@ And then when we run the test:: ================================= FAILURES ================================= _________________________ test_db_initialized[d2] __________________________ - db = + db = def test_db_initialized(db): # a dummy test @@ -204,6 +206,7 @@ And then when we run the test:: E Failed: deliberately failing for demo purposes test_backends.py:6: Failed + 1 failed, 1 passed in 0.01 seconds The first invocation with ``db == "DB1"`` passed while the second with ``db == "DB2"`` failed. Our ``db`` fixture function has instantiated each of the DB values during the setup phase while the ``pytest_generate_tests`` generated two according calls to the ``test_db_initialized`` during the collection phase. @@ -250,13 +253,14 @@ argument sets to use for each test function. 
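The hunks below only show the regenerated session output; the scheme they exercise is a ``pytest_generate_tests`` hook that looks up per-class argument sets. A reconstructed sketch consistent with that output (``sorted()`` is used here to keep argument order deterministic, which the original may not have needed)::

    # test_parametrize.py -- sketch of per-class parametrization
    import pytest

    def pytest_generate_tests(metafunc):
        # called once per test function; look up the class-level params map
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = sorted(funcarglist[0])
        metafunc.parametrize(argnames,
            [[funcargs[name] for name in argnames] for funcargs in funcarglist])

    class TestClass:
        # a map specifying multiple argument sets per test method
        params = {
            'test_equals': [dict(a=1, b=2), dict(a=3, b=3)],
            'test_zerodivision': [dict(a=1, b=0)],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            pytest.raises(ZeroDivisionError, "a/b")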
Let's run it:: ================================= FAILURES ================================= ________________________ TestClass.test_equals[1-2] ________________________ - self = , a = 1, b = 2 + self = , a = 1, b = 2 def test_equals(self, a, b): > assert a == b E assert 1 == 2 test_parametrize.py:18: AssertionError + 1 failed, 2 passed in 0.02 seconds Indirect parametrization with multiple fixtures -------------------------------------------------------------- @@ -278,6 +282,7 @@ Running it results in some skips if we don't have all the python interpreters in ............sss............sss............sss............ssssssssssssssssss ========================= short test summary info ========================== SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found + 48 passed, 27 skipped in 1.37 seconds Indirect parametrization of optional implementations/imports -------------------------------------------------------------------- @@ -324,12 +329,12 @@ If you run this with reporting for skips enabled:: $ py.test -rs test_module.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items - test_module.py .s + test_module.py s. ========================= short test summary info ========================== - SKIP [1] /tmp/doc-exec-275/conftest.py:10: could not import 'opt2' + SKIP [1] /tmp/doc-exec-600/conftest.py:10: could not import 'opt2' =================== 1 passed, 1 skipped in 0.01 seconds ==================== diff --git a/doc/en/example/pythoncollection.txt b/doc/en/example/pythoncollection.txt index 2f65b1faf..abf6340c0 100644 --- a/doc/en/example/pythoncollection.txt +++ b/doc/en/example/pythoncollection.txt @@ -43,7 +43,7 @@ then the test collection looks like this:: $ py.test --collect-only =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items @@ -82,7 +82,7 @@ You can always peek at the collection tree without running tests like this:: . 
$ py.test --collect-only pythoncollection.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 3 items @@ -135,7 +135,7 @@ interpreters and will leave out the setup.py file:: $ py.test --collect-only =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items diff --git a/doc/en/example/reportingdemo.txt b/doc/en/example/reportingdemo.txt index 71e875270..e0160e6fa 100644 --- a/doc/en/example/reportingdemo.txt +++ b/doc/en/example/reportingdemo.txt @@ -13,7 +13,7 @@ get on the terminal - we are working on that): assertion $ py.test failure_demo.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 39 items failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF @@ -30,7 +30,7 @@ get on the terminal - we are working on that): failure_demo.py:15: AssertionError _________________________ TestFailing.test_simple __________________________ - self = + self = def test_simple(self): def f(): @@ -40,13 +40,13 @@ get on the terminal - we are working on that): > assert f() == g() E assert 42 == 43 - E + where 42 = () - E + and 43 = () + E + where 42 = () + E + and 43 = () failure_demo.py:28: AssertionError ____________________ TestFailing.test_simple_multiline _____________________ - self = + self = def test_simple_multiline(self): otherfunc_multi( @@ -66,19 +66,19 @@ get on the terminal - we are working on that): failure_demo.py:11: AssertionError ___________________________ TestFailing.test_not ___________________________ - self = + self = def test_not(self): def f(): return 42 > assert not f() E assert not 42 - E + where 42 = () + E + where 42 = () failure_demo.py:38: AssertionError _________________ TestSpecialisedExplanations.test_eq_text _________________ - self = + self = def test_eq_text(self): > assert 'spam' == 'eggs' @@ -89,7 +89,7 @@ get on the terminal - we are working on that): failure_demo.py:42: AssertionError _____________ TestSpecialisedExplanations.test_eq_similar_text _____________ - self = + self = def test_eq_similar_text(self): > assert 'foo 1 bar' == 'foo 2 bar' @@ -102,7 +102,7 @@ get on the terminal - we are working on that): failure_demo.py:45: AssertionError ____________ TestSpecialisedExplanations.test_eq_multiline_text ____________ - self = + self = def test_eq_multiline_text(self): > assert 'foo\nspam\nbar' == 'foo\neggs\nbar' @@ -115,7 +115,7 @@ get on the terminal - we are working on that): failure_demo.py:48: AssertionError ______________ TestSpecialisedExplanations.test_eq_long_text _______________ - self = + self = def test_eq_long_text(self): a = '1'*100 + 'a' + '2'*100 @@ -132,7 +132,7 @@ get on the terminal - we are working on that): failure_demo.py:53: AssertionError _________ TestSpecialisedExplanations.test_eq_long_text_multiline __________ - self = + self = def test_eq_long_text_multiline(self): a = '1\n'*100 + 'a' + '2\n'*100 @@ -156,7 +156,7 @@ get on the terminal - we are working on that): failure_demo.py:58: AssertionError _________________ TestSpecialisedExplanations.test_eq_list _________________ - self = + self = def test_eq_list(self): > assert [0, 1, 2] == [0, 1, 3] @@ -166,7 +166,7 @@ get on the terminal - we are working on that): 
failure_demo.py:61: AssertionError ______________ TestSpecialisedExplanations.test_eq_list_long _______________ - self = + self = def test_eq_list_long(self): a = [0]*100 + [1] + [3]*100 @@ -178,12 +178,12 @@ get on the terminal - we are working on that): failure_demo.py:66: AssertionError _________________ TestSpecialisedExplanations.test_eq_dict _________________ - self = + self = def test_eq_dict(self): > assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0} E assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0} - E Hiding 1 identical items, use -v to show + E Omitting 1 identical items, use -v to show E Differing items: E {'b': 1} != {'b': 2} E Left contains more items: @@ -194,7 +194,7 @@ get on the terminal - we are working on that): failure_demo.py:69: AssertionError _________________ TestSpecialisedExplanations.test_eq_set __________________ - self = + self = def test_eq_set(self): > assert set([0, 10, 11, 12]) == set([0, 20, 21]) @@ -210,7 +210,7 @@ get on the terminal - we are working on that): failure_demo.py:72: AssertionError _____________ TestSpecialisedExplanations.test_eq_longer_list ______________ - self = + self = def test_eq_longer_list(self): > assert [1,2] == [1,2,3] @@ -220,7 +220,7 @@ get on the terminal - we are working on that): failure_demo.py:75: AssertionError _________________ TestSpecialisedExplanations.test_in_list _________________ - self = + self = def test_in_list(self): > assert 1 in [0, 2, 3, 4, 5] @@ -229,7 +229,7 @@ get on the terminal - we are working on that): failure_demo.py:78: AssertionError __________ TestSpecialisedExplanations.test_not_in_text_multiline __________ - self = + self = def test_not_in_text_multiline(self): text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail' @@ -247,7 +247,7 @@ get on the terminal - we are working on that): failure_demo.py:82: AssertionError ___________ TestSpecialisedExplanations.test_not_in_text_single ____________ - self = + self = def test_not_in_text_single(self): text = 'single foo line' @@ -260,7 +260,7 @@ get on the terminal - we are working on that): failure_demo.py:86: AssertionError _________ TestSpecialisedExplanations.test_not_in_text_single_long _________ - self = + self = def test_not_in_text_single_long(self): text = 'head ' * 50 + 'foo ' + 'tail ' * 20 @@ -273,7 +273,7 @@ get on the terminal - we are working on that): failure_demo.py:90: AssertionError ______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______ - self = + self = def test_not_in_text_single_long_term(self): text = 'head ' * 50 + 'f'*70 + 'tail ' * 20 @@ -292,7 +292,7 @@ get on the terminal - we are working on that): i = Foo() > assert i.b == 2 E assert 1 == 2 - E + where 1 = .b + E + where 1 = .b failure_demo.py:101: AssertionError _________________________ test_attribute_instance __________________________ @@ -302,8 +302,8 @@ get on the terminal - we are working on that): b = 1 > assert Foo().b == 2 E assert 1 == 2 - E + where 1 = .b - E + where = () + E + where 1 = .b + E + where = () failure_demo.py:107: AssertionError __________________________ test_attribute_failure __________________________ @@ -319,7 +319,7 @@ get on the terminal - we are working on that): failure_demo.py:116: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ - self = + self = def _get_b(self): > raise Exception('Failed to get attrib') @@ -335,15 +335,15 @@ get on the terminal - we are working on that): b = 2 > assert Foo().b == Bar().b E assert 1 == 2 - E + where 1 = .b - E + where = () - E + 
and 2 = .b - E + where = () + E + where 1 = .b + E + where = () + E + and 2 = .b + E + where = () failure_demo.py:124: AssertionError __________________________ TestRaises.test_raises __________________________ - self = + self = def test_raises(self): s = 'qwe' @@ -355,10 +355,10 @@ get on the terminal - we are working on that): > int(s) E ValueError: invalid literal for int() with base 10: 'qwe' - <0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:858>:1: ValueError + <0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:905>:1: ValueError ______________________ TestRaises.test_raises_doesnt _______________________ - self = + self = def test_raises_doesnt(self): > raises(IOError, "int('3')") @@ -367,7 +367,7 @@ get on the terminal - we are working on that): failure_demo.py:136: Failed __________________________ TestRaises.test_raise ___________________________ - self = + self = def test_raise(self): > raise ValueError("demo error") @@ -376,7 +376,7 @@ get on the terminal - we are working on that): failure_demo.py:139: ValueError ________________________ TestRaises.test_tupleerror ________________________ - self = + self = def test_tupleerror(self): > a,b = [1] @@ -385,7 +385,7 @@ get on the terminal - we are working on that): failure_demo.py:142: ValueError ______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______ - self = + self = def test_reinterpret_fails_with_print_for_the_fun_of_it(self): l = [1,2,3] @@ -398,7 +398,7 @@ get on the terminal - we are working on that): l is [1, 2, 3] ________________________ TestRaises.test_some_error ________________________ - self = + self = def test_some_error(self): > if namenotexi: @@ -426,7 +426,7 @@ get on the terminal - we are working on that): <2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError ____________________ TestMoreErrors.test_complex_error _____________________ - self = + self = def test_complex_error(self): def f(): @@ -455,7 +455,7 @@ get on the terminal - we are working on that): failure_demo.py:5: AssertionError ___________________ TestMoreErrors.test_z1_unpack_error ____________________ - self = + self = def test_z1_unpack_error(self): l = [] @@ -465,7 +465,7 @@ get on the terminal - we are working on that): failure_demo.py:179: ValueError ____________________ TestMoreErrors.test_z2_type_error _____________________ - self = + self = def test_z2_type_error(self): l = 3 @@ -475,19 +475,19 @@ get on the terminal - we are working on that): failure_demo.py:183: TypeError ______________________ TestMoreErrors.test_startswith ______________________ - self = + self = def test_startswith(self): s = "123" g = "456" > assert s.startswith(g) - E assert ('456') - E + where = '123'.startswith + E assert ('456') + E + where = '123'.startswith failure_demo.py:188: AssertionError __________________ TestMoreErrors.test_startswith_nested ___________________ - self = + self = def test_startswith_nested(self): def f(): @@ -495,15 +495,15 @@ get on the terminal - we are working on that): def g(): return "456" > assert f().startswith(g()) - E assert ('456') - E + where = '123'.startswith - E + where '123' = () - E + and '456' = () + E assert ('456') + E + where = '123'.startswith + E + where '123' = () + E + and '456' = () failure_demo.py:195: AssertionError _____________________ TestMoreErrors.test_global_func ______________________ - self = + self = def test_global_func(self): > assert 
isinstance(globf(42), float) @@ -513,18 +513,18 @@ get on the terminal - we are working on that): failure_demo.py:198: AssertionError _______________________ TestMoreErrors.test_instance _______________________ - self = + self = def test_instance(self): self.x = 6*7 > assert self.x != 42 E assert 42 != 42 - E + where 42 = .x + E + where 42 = .x failure_demo.py:202: AssertionError _______________________ TestMoreErrors.test_compare ________________________ - self = + self = def test_compare(self): > assert globf(10) < 5 @@ -534,7 +534,7 @@ get on the terminal - we are working on that): failure_demo.py:205: AssertionError _____________________ TestMoreErrors.test_try_finally ______________________ - self = + self = def test_try_finally(self): x = 1 @@ -543,4 +543,4 @@ get on the terminal - we are working on that): E assert 1 == 0 failure_demo.py:210: AssertionError - ======================== 39 failed in 0.21 seconds ========================= + ======================== 39 failed in 0.26 seconds ========================= diff --git a/doc/en/example/simple.txt b/doc/en/example/simple.txt index 084855daa..0bee63245 100644 --- a/doc/en/example/simple.txt +++ b/doc/en/example/simple.txt @@ -55,6 +55,7 @@ Let's run this without supplying our new option:: test_sample.py:6: AssertionError ----------------------------- Captured stdout ------------------------------ first + 1 failed in 0.01 seconds And now with supplying a command line option:: @@ -76,6 +77,7 @@ And now with supplying a command line option:: test_sample.py:6: AssertionError ----------------------------- Captured stdout ------------------------------ second + 1 failed in 0.01 seconds You can see that the command line option arrived in our test. This completes the basic pattern. However, one often rather wants to process @@ -106,7 +108,7 @@ directory with the above conftest.py:: $ py.test =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 0 items ============================= in 0.00 seconds ============================= @@ -150,12 +152,12 @@ and when running it will see a skipped "slow" test:: $ py.test -rs # "-rs" means report details on the little 's' =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items test_module.py .s ========================= short test summary info ========================== - SKIP [1] /tmp/doc-exec-278/conftest.py:9: need --runslow option to run + SKIP [1] /tmp/doc-exec-603/conftest.py:9: need --runslow option to run =================== 1 passed, 1 skipped in 0.01 seconds ==================== @@ -163,7 +165,7 @@ Or run it including the ``slow`` marked test:: $ py.test --runslow =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items test_module.py .. 
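The ``need --runslow option to run`` skip reported above comes from a small ``conftest.py`` of the kind this section describes; a reconstructed sketch (``getvalue`` is one long-available way to read the option)::

    # conftest.py -- sketch of an opt-in "slow" marker
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         help="run tests marked as slow")

    def pytest_runtest_setup(item):
        # skip @pytest.mark.slow tests unless --runslow was given
        if "slow" in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")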
@@ -206,6 +208,7 @@ Let's run our little function:: E Failed: not configured: 42 test_checkconfig.py:8: Failed + 1 failed in 0.01 seconds Detect if running from within a py.test run -------------------------------------------------------------- @@ -253,7 +256,7 @@ which will add the string to the test header accordingly:: $ py.test =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 project deps: mylib-1.1 collected 0 items @@ -276,7 +279,7 @@ which will add info only when run with "--v":: $ py.test -v =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 -- /home/hpk/p/pytest/.tox/regen/bin/python + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python info1: did you know that ... did you? collecting ... collected 0 items @@ -287,7 +290,7 @@ and nothing when run plainly:: $ py.test =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 0 items ============================= in 0.00 seconds ============================= @@ -319,7 +322,7 @@ Now we can profile which test functions execute the slowest:: $ py.test --durations=3 =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 3 items test_some_are_slow.py ... @@ -380,7 +383,7 @@ If we run this:: $ py.test -rx =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 4 items test_step.py .Fx. @@ -388,7 +391,7 @@ If we run this:: ================================= FAILURES ================================= ____________________ TestUserHandling.test_modification ____________________ - self = + self = def test_modification(self): > assert 0 @@ -398,7 +401,7 @@ If we run this:: ========================= short test summary info ========================== XFAIL test_step.py::TestUserHandling::()::test_deletion reason: previous test failed (test_modification) - ============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds =============== + ============== 1 failed, 2 passed, 1 xfailed in 0.02 seconds =============== We'll see that ``test_deletion`` was not executed because ``test_modification`` failed. It is reported as an "expected failure". @@ -450,7 +453,7 @@ We can run this:: $ py.test =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 7 items test_step.py .Fx. @@ -460,17 +463,17 @@ We can run this:: ================================== ERRORS ================================== _______________________ ERROR at setup of test_root ________________________ - file /tmp/doc-exec-278/b/test_error.py, line 1 + file /tmp/doc-exec-603/b/test_error.py, line 1 def test_root(db): # no db here, will error out fixture 'db' not found available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir use 'py.test --fixtures [testpath]' for help on them. 
- /tmp/doc-exec-278/b/test_error.py:1 + /tmp/doc-exec-603/b/test_error.py:1 ================================= FAILURES ================================= ____________________ TestUserHandling.test_modification ____________________ - self = + self = def test_modification(self): > assert 0 @@ -479,20 +482,20 @@ We can run this:: test_step.py:9: AssertionError _________________________________ test_a1 __________________________________ - db = + db = def test_a1(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: a/test_db.py:2: AssertionError _________________________________ test_a2 __________________________________ - db = + db = def test_a2(db): > assert 0, db # to show value - E AssertionError: + E AssertionError: a/test_db2.py:2: AssertionError ========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ========== @@ -550,7 +553,7 @@ and run them:: $ py.test test_module.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items test_module.py FF @@ -558,7 +561,7 @@ and run them:: ================================= FAILURES ================================= ________________________________ test_fail1 ________________________________ - tmpdir = local('/tmp/pytest-326/test_fail10') + tmpdir = local('/tmp/pytest-190/test_fail10') def test_fail1(tmpdir): > assert 0 @@ -572,12 +575,12 @@ and run them:: E assert 0 test_module.py:4: AssertionError - ========================= 2 failed in 0.02 seconds ========================= + ========================= 2 failed in 0.01 seconds ========================= you will have a "failures" file which contains the failing test ids:: $ cat failures - test_module.py::test_fail1 (/tmp/pytest-326/test_fail10) + test_module.py::test_fail1 (/tmp/pytest-190/test_fail10) test_module.py::test_fail2 Making test result information available in fixtures @@ -640,10 +643,12 @@ and run it:: $ py.test -s test_module.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 3 items - test_module.py EFF + test_module.py Esetting up a test failed! test_module.py::test_setup_fails + Fexecuting test failed test_module.py::test_call_fails + F ================================== ERRORS ================================== ____________________ ERROR at setup of test_setup_fails ____________________ @@ -671,9 +676,7 @@ and run it:: E assert 0 test_module.py:15: AssertionError - ==================== 2 failed, 1 error in 0.01 seconds ===================== - setting up a test failed! test_module.py::test_setup_fails - executing test failed test_module.py::test_call_fails + ==================== 2 failed, 1 error in 0.02 seconds ===================== You'll see that the fixture finalizers could use the precise reporting information. diff --git a/doc/en/example/special.txt b/doc/en/example/special.txt index 28051ce23..76d6b29e5 100644 --- a/doc/en/example/special.txt +++ b/doc/en/example/special.txt @@ -61,12 +61,13 @@ will be called ahead of running any tests:: If you run this without output capturing:: $ py.test -q -s test_module.py - .... callattr_ahead_of_alltests called callme called! callme other called SomeTest callme called test_method1 called - test_method1 called - test other - test_unit1 method called + .test_method1 called + .test other + .test_unit1 method called + . 
+ 4 passed in 0.02 seconds diff --git a/doc/en/fixture.txt b/doc/en/fixture.txt index 7aed5973b..7b52990e3 100644 --- a/doc/en/fixture.txt +++ b/doc/en/fixture.txt @@ -76,8 +76,7 @@ marked ``smtp`` fixture function. Running the test looks like this:: $ py.test test_smtpsimple.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12 - plugins: xdist, pep8, cov, cache, capturelog, instafail + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items test_smtpsimple.py F @@ -85,7 +84,7 @@ marked ``smtp`` fixture function. Running the test looks like this:: ================================= FAILURES ================================= ________________________________ test_ehlo _________________________________ - smtp = + smtp = def test_ehlo(smtp): response, msg = smtp.ehlo() @@ -95,7 +94,7 @@ marked ``smtp`` fixture function. Running the test looks like this:: E assert 0 test_smtpsimple.py:12: AssertionError - ========================= 1 failed in 0.17 seconds ========================= + ========================= 1 failed in 0.18 seconds ========================= In the failure traceback we see that the test function was called with a ``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture @@ -195,8 +194,7 @@ inspect what is going on and can now run the tests:: $ py.test test_module.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12 - plugins: xdist, pep8, cov, cache, capturelog, instafail + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items test_module.py FF @@ -204,7 +202,7 @@ inspect what is going on and can now run the tests:: ================================= FAILURES ================================= ________________________________ test_ehlo _________________________________ - smtp = + smtp = def test_ehlo(smtp): response = smtp.ehlo() @@ -216,7 +214,7 @@ inspect what is going on and can now run the tests:: test_module.py:6: AssertionError ________________________________ test_noop _________________________________ - smtp = + smtp = def test_noop(smtp): response = smtp.noop() @@ -225,7 +223,7 @@ inspect what is going on and can now run the tests:: E assert 0 test_module.py:11: AssertionError - ========================= 2 failed in 0.18 seconds ========================= + ========================= 2 failed in 0.16 seconds ========================= You see the two ``assert 0`` failing and more importantly you can also see that the same (module-scoped) ``smtp`` object was passed into the two @@ -271,8 +269,9 @@ the fixture in the module has finished execution. Let's execute it:: $ py.test -s -q --tb=no - FF - 2 failed in 0.20 seconds + FFteardown smtp + + 2 failed in 0.15 seconds We see that the ``smtp`` instance is finalized after the two tests finished execution. 
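The ``teardown smtp`` line in that output comes from a finalizer registered on the fixture's ``request`` object; a reconstructed sketch of such a module-scoped fixture::

    # conftest.py -- module-scoped fixture with a teardown finalizer
    import smtplib
    import pytest

    @pytest.fixture(scope="module")
    def smtp(request):
        smtp = smtplib.SMTP("merlinux.eu")  # server name as used in these docs
        def fin():
            print ("teardown smtp")
            smtp.close()
        # fin() runs once, after the last test in the module finished
        request.addfinalizer(fin)
        return smtp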
Note that if we decorated our fixture @@ -313,7 +312,7 @@ again, nothing much has changed:: $ py.test -s -q --tb=no FF - 2 failed in 0.18 seconds + 2 failed in 0.16 seconds Let's quickly create another test module that actually sets the server URL in its module namespace:: @@ -380,7 +379,7 @@ So let's just do another run:: ================================= FAILURES ================================= __________________________ test_ehlo[merlinux.eu] __________________________ - smtp = + smtp = def test_ehlo(smtp): response = smtp.ehlo() @@ -392,7 +391,7 @@ So let's just do another run:: test_module.py:6: AssertionError __________________________ test_noop[merlinux.eu] __________________________ - smtp = + smtp = def test_noop(smtp): response = smtp.noop() @@ -403,7 +402,7 @@ So let's just do another run:: test_module.py:11: AssertionError ________________________ test_ehlo[mail.python.org] ________________________ - smtp = + smtp = def test_ehlo(smtp): response = smtp.ehlo() @@ -414,7 +413,7 @@ So let's just do another run:: test_module.py:5: AssertionError ________________________ test_noop[mail.python.org] ________________________ - smtp = + smtp = def test_noop(smtp): response = smtp.noop() @@ -423,7 +422,7 @@ So let's just do another run:: E assert 0 test_module.py:11: AssertionError - 4 failed in 6.47 seconds + 4 failed in 6.32 seconds We see that our two test functions each ran twice, against the different ``smtp`` instances. Note also, that with the ``mail.python.org`` @@ -463,15 +462,13 @@ Here we declare an ``app`` fixture which receives the previously defined $ py.test -v test_appsetup.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12 -- /home/hpk/venv/0/bin/python - cachedir: /tmp/doc-exec-127/.cache - plugins: xdist, pep8, cov, cache, capturelog, instafail + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 2 items test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED - ========================= 2 passed in 6.07 seconds ========================= + ========================= 2 passed in 5.75 seconds ========================= Due to the parametrization of ``smtp`` the test will run twice with two different ``App`` instances and respective smtp servers. There is no @@ -529,9 +526,7 @@ Let's run the tests in verbose mode and with looking at the print-output:: $ py.test -v -s test_module.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev12 -- /home/hpk/venv/0/bin/python - cachedir: /tmp/doc-exec-127/.cache - plugins: xdist, pep8, cov, cache, capturelog, instafail + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python collecting ... collected 8 items test_module.py:15: test_0[1] test0 1 @@ -553,7 +548,7 @@ Let's run the tests in verbose mode and with looking at the print-output:: test_module.py:19: test_2[2-mod2] test2 2 mod2 PASSED - ========================= 8 passed in 0.02 seconds ========================= + ========================= 8 passed in 0.01 seconds ========================= You can see that the parametrized module-scoped ``modarg`` resource caused an ordering of test execution that lead to the fewest possible "active" resources. 
The finalizer for the ``mod1`` parametrized resource was executed @@ -609,7 +604,7 @@ to verify our fixture is activated and the tests pass:: $ py.test -q .. - 2 passed in 0.02 seconds + 2 passed in 0.01 seconds You can specify multiple fixtures like this:: @@ -680,7 +675,7 @@ If we run it, we get two passing tests:: $ py.test -q .. - 2 passed in 0.02 seconds + 2 passed in 0.01 seconds Here is how autouse fixtures work in other scopes: diff --git a/doc/en/getting-started.txt b/doc/en/getting-started.txt index 83ce562bd..dc99ff5d3 100644 --- a/doc/en/getting-started.txt +++ b/doc/en/getting-started.txt @@ -23,7 +23,7 @@ Installation options:: To check your installation has installed the correct version:: $ py.test --version - This is py.test version 2.3.5, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.py + This is py.test version 2.4.2, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc If you get an error checkout :ref:`installation issues`. @@ -45,7 +45,7 @@ That's it. You can execute the test function now:: $ py.test =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items test_sample.py F @@ -93,6 +93,7 @@ Running it with, this time in "quiet" reporting mode:: $ py.test -q test_sysexit.py . + 1 passed in 0.01 seconds .. todo:: For further ways to assert exceptions see the `raises` @@ -122,7 +123,7 @@ run the module by passing its filename:: ================================= FAILURES ================================= ____________________________ TestClass.test_two ____________________________ - self = + self = def test_two(self): x = "hello" @@ -130,6 +131,7 @@ run the module by passing its filename:: E assert hasattr('hello', 'check') test_class.py:8: AssertionError + 1 failed, 1 passed in 0.01 seconds The first test passed, the second failed. Again we can easily see the intermediate values used in the assertion, helping us to @@ -157,7 +159,7 @@ before performing the test function call. Let's just run it:: ================================= FAILURES ================================= _____________________________ test_needsfiles ______________________________ - tmpdir = local('/tmp/pytest-322/test_needsfiles0') + tmpdir = local('/tmp/pytest-186/test_needsfiles0') def test_needsfiles(tmpdir): print tmpdir @@ -166,7 +168,8 @@ before performing the test function call. Let's just run it:: test_tmpdir.py:3: AssertionError ----------------------------- Captured stdout ------------------------------ - /tmp/pytest-322/test_needsfiles0 + /tmp/pytest-186/test_needsfiles0 + 1 failed in 0.01 seconds Before the test runs, a unique-per-test-invocation temporary directory was created. More info at :ref:`tmpdir handling`. 
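For orientation, here is what a passing use of the ``tmpdir`` fixture looks like; a minimal sketch, not taken from the patch, relying on ``tmpdir`` being a ``py.path.local`` object::

    # test_tmpdir_usage.py -- illustrative tmpdir sketch
    def test_create_and_read(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")               # py.path.local write/read helpers
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1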
diff --git a/doc/en/parametrize.txt b/doc/en/parametrize.txt index 5c326d0b2..63cc2ed0f 100644 --- a/doc/en/parametrize.txt +++ b/doc/en/parametrize.txt @@ -52,15 +52,14 @@ tuples so that that the ``test_eval`` function will run three times using them in turn:: $ py.test - ============================= test session starts ============================== - platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev3 - plugins: xdist, cache, cli, pep8, xprocess, cov, capturelog, bdd-splinter, rerunfailures, instafail, localserver + =========================== test session starts ============================ + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 3 items test_expectation.py ..F - =================================== FAILURES =================================== - ______________________________ test_eval[6*9-42] _______________________________ + ================================= FAILURES ================================= + ____________________________ test_eval[6*9-42] _____________________________ input = '6*9', expected = 42 @@ -75,7 +74,7 @@ them in turn:: E + where 54 = eval('6*9') test_expectation.py:8: AssertionError - ====================== 1 failed, 2 passed in 0.02 seconds ====================== + ==================== 1 failed, 2 passed in 0.01 seconds ==================== As designed in this example, only one pair of input/output values fails the simple test function. And as usual with test function arguments, @@ -100,14 +99,13 @@ for example with the builtin ``mark.xfail``:: Let's run this:: $ py.test - ============================= test session starts ============================== - platform linux2 -- Python 2.7.3 -- pytest-2.4.0.dev3 - plugins: xdist, cache, cli, pep8, xprocess, cov, capturelog, bdd-splinter, rerunfailures, instafail, localserver + =========================== test session starts ============================ + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 3 items test_expectation.py ..x - ===================== 2 passed, 1 xfailed in 0.02 seconds ====================== + =================== 2 passed, 1 xfailed in 0.01 seconds ==================== The one parameter set which caused a failure previously now shows up as an "xfailed (expected to fail)" test. @@ -159,22 +157,24 @@ If we now pass two stringinput values, our test will run twice:: $ py.test -q --stringinput="hello" --stringinput="world" test_strings.py .. + 2 passed in 0.01 seconds Let's also run with a stringinput that will lead to a failing test:: $ py.test -q --stringinput="!" test_strings.py F - =================================== FAILURES =================================== - _____________________________ test_valid_string[!] _____________________________ + ================================= FAILURES ================================= + ___________________________ test_valid_string[!] ___________________________ stringinput = '!' def test_valid_string(stringinput): > assert stringinput.isalpha() - E assert () - E + where = '!'.isalpha + E assert () + E + where = '!'.isalpha test_strings.py:3: AssertionError + 1 failed in 0.01 seconds As expected our test function fails. 
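These ``--stringinput`` runs are driven by a small ``conftest.py`` that turns a repeatable command line option into fixture parametrization; a reconstructed sketch, since the patch only shows the regenerated output::

    # conftest.py -- sketch of command-line-driven parametrization
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            # each --stringinput value becomes one parametrized test run;
            # no values means an empty parameter set, reported as skipped
            metafunc.parametrize("stringinput",
                                 metafunc.config.option.stringinput)

    # test_strings.py
    def test_valid_string(stringinput):
        assert stringinput.isalpha()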
@@ -184,8 +184,9 @@ listlist:: $ py.test -q -rs test_strings.py s - =========================== short test summary info ============================ - SKIP [1] /home/hpk/p/pytest/_pytest/python.py:999: got empty parameter set, function test_valid_string at /tmp/doc-exec-2/test_strings.py:1 + ========================= short test summary info ========================== + SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:1024: got empty parameter set, function test_valid_string at /tmp/doc-exec-561/test_strings.py:1 + 1 skipped in 0.01 seconds For further examples, you might want to look at :ref:`more parametrization examples `. diff --git a/doc/en/skipping.txt b/doc/en/skipping.txt index 8ca2188a1..615da76fe 100644 --- a/doc/en/skipping.txt +++ b/doc/en/skipping.txt @@ -158,14 +158,14 @@ Running it with the report-on-xfail option gives this output:: example $ py.test -rx xfail_demo.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 6 items - + xfail_demo.py xxxxxx ========================= short test summary info ========================== XFAIL xfail_demo.py::test_hello XFAIL xfail_demo.py::test_hello2 - reason: [NOTRUN] + reason: [NOTRUN] XFAIL xfail_demo.py::test_hello3 condition: hasattr(os, 'sep') XFAIL xfail_demo.py::test_hello4 @@ -174,7 +174,7 @@ Running it with the report-on-xfail option gives this output:: condition: pytest.__version__[0] != "17" XFAIL xfail_demo.py::test_hello6 reason: reason - + ======================== 6 xfailed in 0.05 seconds ========================= .. _`skip/xfail with parametrize`: diff --git a/doc/en/tmpdir.txt b/doc/en/tmpdir.txt index 531b3433a..13382e241 100644 --- a/doc/en/tmpdir.txt +++ b/doc/en/tmpdir.txt @@ -29,7 +29,7 @@ Running this would result in a passed test except for the last $ py.test test_tmpdir.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 1 items test_tmpdir.py F @@ -37,7 +37,7 @@ Running this would result in a passed test except for the last ================================= FAILURES ================================= _____________________________ test_create_file _____________________________ - tmpdir = local('/tmp/pytest-323/test_create_file0') + tmpdir = local('/tmp/pytest-187/test_create_file0') def test_create_file(tmpdir): p = tmpdir.mkdir("sub").join("hello.txt") @@ -48,7 +48,7 @@ Running this would result in a passed test except for the last E assert 0 test_tmpdir.py:7: AssertionError - ========================= 1 failed in 0.02 seconds ========================= + ========================= 1 failed in 0.01 seconds ========================= .. 
_`base temporary directory`: diff --git a/doc/en/unittest.txt b/doc/en/unittest.txt index 9184584fd..80e7c27e0 100644 --- a/doc/en/unittest.txt +++ b/doc/en/unittest.txt @@ -88,7 +88,7 @@ the ``self.db`` values in the traceback:: $ py.test test_unittest_db.py =========================== test session starts ============================ - platform linux2 -- Python 2.7.3 -- pytest-2.3.5 + platform linux2 -- Python 2.7.3 -- pytest-2.4.2 collected 2 items test_unittest_db.py FF @@ -101,7 +101,7 @@ the ``self.db`` values in the traceback:: def test_method1(self): assert hasattr(self, "db") > assert 0, self.db # fail for demo purposes - E AssertionError: + E AssertionError: test_unittest_db.py:9: AssertionError ___________________________ MyTest.test_method2 ____________________________ @@ -110,7 +110,7 @@ the ``self.db`` values in the traceback:: def test_method2(self): > assert 0, self.db # fail for demo purposes - E AssertionError: + E AssertionError: test_unittest_db.py:12: AssertionError ========================= 2 failed in 0.02 seconds ========================= @@ -160,6 +160,7 @@ Running this test module ...:: $ py.test -q test_unittest_cleandir.py . + 1 passed in 0.02 seconds ... gives us one passed test because the ``initdir`` fixture function was executed ahead of the ``test_method``. diff --git a/doc/en/usage.txt b/doc/en/usage.txt index f068db403..946af55c3 100644 --- a/doc/en/usage.txt +++ b/doc/en/usage.txt @@ -188,7 +188,7 @@ Running it will show that ``MyPlugin`` was added and its hook was invoked:: $ python myinvoke.py - *** test run reporting finishing + .. include:: links.inc
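The ``myinvoke.py`` run shown in the usage section corresponds to invoking pytest in-process with a plugin object; a sketch matching the printed line (the particular hook chosen here is an assumption, any reporting hook would do)::

    # myinvoke.py -- sketch of calling pytest from Python code with a plugin
    import pytest

    class MyPlugin:
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")

    if __name__ == "__main__":
        # pytest-2.4 accepts an option string; the plugin instance is
        # registered for this run only
        pytest.main("-qq", plugins=[MyPlugin()])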