Regendoc after more fixes on features branch

commit 701d5fc727 (parent 6e3105dc8f)
@@ -246,7 +246,8 @@ the conftest file::
 f1 = Foo(1)
 f2 = Foo(2)
 > assert f1 == f2
-E AssertionError
+E assert Comparing Foo instances:
+E      vals: 1 != 2
 
 test_foocompare.py:11: AssertionError
 1 failed in 0.12 seconds
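For context: this hunk regenerates the failure output of the custom comparison example in the assertion docs. A minimal sketch of the two files that section builds on (reconstructed from the surrounding docs, not part of this diff)::

    # content of test_foocompare.py
    class Foo:
        def __init__(self, val):
            self.val = val

        def __eq__(self, other):
            return self.val == other.val

    def test_compare():
        f1 = Foo(1)
        f2 = Foo(2)
        assert f1 == f2

    # content of conftest.py
    from test_foocompare import Foo

    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            # the first entry becomes the "assert ..." summary line; later
            # entries are printed as the "E" explanation lines in the output
            return ["Comparing Foo instances:",
                    "   vals: %s != %s" % (left.val, right.val)]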
@@ -110,6 +110,7 @@ If you then run it with ``--lf``::
 E Failed: bad luck
 
 test_50.py:6: Failed
+======= 48 tests deselected ========
 ======= 2 failed, 48 deselected in 0.12 seconds ========
 
 You have run only the two failing test from the last run, while 48 tests have
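The ``--lf`` (last-failed) run above comes from the cache-plugin docs, which drive it with a 50-test module along these lines (a sketch; two parameter values are made to fail so that a rerun deselects the other 48)::

    # content of test_50.py
    import pytest

    @pytest.mark.parametrize("i", range(50))
    def test_num(i):
        if i in (17, 25):
            pytest.fail("bad luck")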
@@ -38,7 +38,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::
 
 test_server.py::test_send_http PASSED
 
-======= 3 tests deselected by "-m 'webtest'" ========
+======= 3 tests deselected ========
 ======= 1 passed, 3 deselected in 0.12 seconds ========
 
 Or the inverse, running all tests except the webtest ones::
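This and the following marker/keyword hunks all regenerate runs against the same four-test module; the docs define it roughly like this, which is consistent with every pass/deselect count below::

    # content of test_server.py
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass

    def test_another():
        pass

    class TestClass:
        def test_method(self):
            pass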
@@ -54,7 +54,7 @@ Or the inverse, running all tests except the webtest ones::
 test_server.py::test_another PASSED
 test_server.py::TestClass::test_method PASSED
 
-======= 1 tests deselected by "-m 'not webtest'" ========
+======= 1 tests deselected ========
 ======= 3 passed, 1 deselected in 0.12 seconds ========
 
 Selecting tests based on their node ID
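A side note on the ``-m`` runs: a marker such as ``webtest`` is normally registered so that ``--markers`` lists it. The docs use the ``markers`` ini option; an equivalent hook-based sketch (an illustration, not part of this diff)::

    # content of conftest.py
    def pytest_configure(config):
        # register the marker; equivalent to a "markers =" entry in pytest.ini
        config.addinivalue_line("markers",
                                "webtest: mark a test as a webtest.")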
@@ -137,7 +137,7 @@ select tests based on their names::
 
 test_server.py::test_send_http PASSED
 
-======= 3 tests deselected by '-khttp' ========
+======= 3 tests deselected ========
 ======= 1 passed, 3 deselected in 0.12 seconds ========
 
 And you can also run all tests except the ones that match the keyword::
@@ -153,7 +153,7 @@ And you can also run all tests except the ones that match the keyword::
 test_server.py::test_another PASSED
 test_server.py::TestClass::test_method PASSED
 
-======= 1 tests deselected by '-knot send_http' ========
+======= 1 tests deselected ========
 ======= 3 passed, 1 deselected in 0.12 seconds ========
 
 Or to select "http" and "quick" tests::
@@ -168,7 +168,7 @@ Or to select "http" and "quick" tests::
 test_server.py::test_send_http PASSED
 test_server.py::test_something_quick PASSED
 
-======= 2 tests deselected by '-khttp or quick' ========
+======= 2 tests deselected ========
 ======= 2 passed, 2 deselected in 0.12 seconds ========
 
 .. note::
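The three keyword-expression runs above (``-k http``, ``-k "not send_http"``, ``-k "http or quick"``) can also be issued programmatically; ``pytest.main()`` accepts the same argument list. A sketch::

    import pytest

    # mirrors the command lines whose output is regenerated above
    pytest.main(["-v", "-k", "http"])            # only test_send_http
    pytest.main(["-v", "-k", "not send_http"])   # everything except test_send_http
    pytest.main(["-v", "-k", "http or quick"])   # boolean keyword expression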
@@ -505,7 +505,7 @@ Note that if you specify a platform via the marker-command line option like this
 
 test_plat.py s
 
-======= 3 tests deselected by "-m 'linux2'" ========
+======= 3 tests deselected ========
 ======= 1 skipped, 3 deselected in 0.12 seconds ========
 
 then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
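The ``test_plat.py s`` line and the "1 skipped, 3 deselected" count come from the platform-marker example: each test carries one platform marker, a hook in that section's conftest.py skips marked tests on foreign platforms, and ``-m linux2`` deselects the other three. A sketch of the test module::

    # content of test_plat.py
    import pytest

    @pytest.mark.darwin
    def test_if_apple_is_evil():
        pass

    @pytest.mark.linux2
    def test_if_linux_works():
        pass

    @pytest.mark.win32
    def test_if_win32_crashes():
        pass

    def test_runs_everywhere():
        pass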
@@ -566,7 +566,7 @@ We can now use the ``-m option`` to select one set::
 test_module.py:6: in test_interface_complex
 assert 0
 E assert 0
-======= 2 tests deselected by "-m 'interface'" ========
+======= 2 tests deselected ========
 ======= 2 failed, 2 deselected in 0.12 seconds ========
 
 or to select both "event" and "interface" tests::
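These failures come from the name-based marking example; its test module is simply four always-failing tests whose names drive the marking (a sketch)::

    # content of test_module.py

    def test_interface_simple():
        assert 0

    def test_interface_complex():
        assert 0

    def test_event_simple():
        assert 0

    def test_unmarked():
        assert 0

With the markers applied, ``-m interface`` selects the two ``interface`` tests, giving the "2 failed, 2 deselected" summary regenerated above.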
@@ -592,5 +592,5 @@ or to select both "event" and "interface" tests::
 test_module.py:9: in test_event_simple
 assert 0
 E assert 0
-======= 1 tests deselected by "-m 'interface or event'" ========
+======= 1 tests deselected ========
 ======= 3 failed, 1 deselected in 0.12 seconds ========
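The markers behind these two runs are not written on the tests themselves; the docs add them from the test names in a collection hook, roughly::

    # content of conftest.py
    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            if "interface" in item.nodeid:
                item.add_marker(pytest.mark.interface)
            elif "event" in item.nodeid:
                item.add_marker(pytest.mark.event)

``-m "interface or event"`` then selects three of the four tests, matching the "3 failed, 1 deselected" summary above.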
@@ -369,7 +369,7 @@ argument sets to use for each test function. Let's run it::
 $ pytest -q
 F..
 ======= FAILURES ========
-_______ TestClass.test_equals[2-1] ________
+_______ TestClass.test_equals[1-2] ________
 
 self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2
 
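The id change from ``test_equals[2-1]`` to ``test_equals[1-2]`` reflects argument ordering in the regenerated output. The per-class parametrization example behind it looks roughly like this (reconstructed from the docs; note the sorted argument names, which yield the ``[1-2]`` id)::

    # content of test_parametrize.py
    import pytest

    def pytest_generate_tests(metafunc):
        # called once per test function
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = sorted(funcarglist[0])
        metafunc.parametrize(argnames,
                             [[funcargs[name] for name in argnames]
                              for funcargs in funcarglist])

    class TestClass:
        # a map specifying multiple argument sets for a test method
        params = {
            "test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
            "test_zerodivision": [dict(a=1, b=0)],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            with pytest.raises(ZeroDivisionError):
                a / b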
@@ -399,8 +399,8 @@ Running it results in some skips if we don't have all the python interpreters in
 . $ pytest -rs -q multipython.py
 ssssssssssss...ssssssssssss
 ======= short test summary info ========
-SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python3.3' not found
 SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python2.6' not found
+SKIP [12] $REGENDOC_TMPDIR/CWD/multipython.py:23: 'python3.3' not found
 3 passed, 24 skipped in 0.12 seconds
 
 Indirect parametrization of optional implementations/imports
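This hunk only reorders the two SKIP lines. They are produced by parametrized fixtures in ``multipython.py`` that skip when an interpreter is missing; an abridged sketch (the pickle dump/load helpers are omitted)::

    # abridged from the docs' multipython.py
    import py
    import pytest

    pythonlist = ["python2.6", "python2.7", "python3.3"]

    @pytest.fixture(params=pythonlist)
    def python1(request, tmpdir):
        picklefile = tmpdir.join("data.pickle")
        return Python(request.param, picklefile)

    @pytest.fixture(params=pythonlist)
    def python2(request, python1):
        return Python(request.param, python1.picklefile)

    class Python:
        def __init__(self, version, picklefile):
            self.pythonpath = py.path.local.sysfind(version)
            if not self.pythonpath:
                pytest.skip("%r not found" % (version,))  # -> the SKIP lines
            self.picklefile = picklefile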
@@ -501,7 +501,7 @@ We can run this::
 file $REGENDOC_TMPDIR/b/test_error.py, line 1
 def test_root(db): # no db here, will error out
 E fixture 'db' not found
-available fixtures: monkeypatch, capfd, recwarn, pytestconfig, tmpdir_factory, tmpdir, cache, capsys, record_xml_property, doctest_namespace
+available fixtures: cache, capfd, capsys, doctest_namespace, monkeypatch, pytestconfig, record_xml_property, recwarn, tmpdir, tmpdir_factory
 use 'pytest --fixtures [testpath]' for help on them.
 
 $REGENDOC_TMPDIR/b/test_error.py:1
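The only change here is that the "available fixtures" list is now sorted alphabetically. The error itself comes from the fixture-visibility example: ``db`` is defined in one subdirectory's conftest and requested from a sibling directory where it is not visible (a sketch)::

    # content of a/conftest.py
    import pytest

    class DB:
        pass

    @pytest.fixture(scope="session")
    def db():
        return DB()

    # content of b/test_error.py
    def test_root(db):  # no db here, will error out
        pass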