v2 of resources API draft

parent 7a90bed19b
commit dbaf7ee9d0

@@ -24,7 +24,8 @@ you will see the return value of the function call::

 $ py.test test_assert1.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_assert1.py F

@@ -38,7 +39,7 @@ you will see the return value of the function call::
 E + where 3 = f()

 test_assert1.py:5: AssertionError
-========================= 1 failed in 0.01 seconds =========================
+========================= 1 failed in 0.02 seconds =========================

 py.test has support for showing the values of the most common subexpressions
 including calls, attributes, comparisons, and binary and unary

@@ -106,7 +107,8 @@ if you run this module::

 $ py.test test_assert2.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_assert2.py F

@@ -125,7 +127,7 @@ if you run this module::
 E '5'

 test_assert2.py:5: AssertionError
-========================= 1 failed in 0.01 seconds =========================
+========================= 1 failed in 0.02 seconds =========================

 Special comparisons are done for a number of cases:

@@ -182,7 +184,7 @@ the conftest file::
 E vals: 1 != 2

 test_foocompare.py:8: AssertionError
-1 failed in 0.01 seconds
+1 failed in 0.02 seconds

 .. _assert-details:
 .. _`assert introspection`:

@@ -28,7 +28,8 @@ You can ask for available builtin or project-custom

 $ py.test --funcargs
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collected 0 items
 pytestconfig
     the pytest config object with access to command line opts.

@@ -76,5 +77,7 @@ You can ask for available builtin or project-custom
 See http://docs.python.org/library/warnings.html for information
 on warning categories.

+cov
+    A pytest funcarg that provides access to the underlying coverage object.
-============================= in 0.00 seconds =============================
+============================= in 0.01 seconds =============================

@@ -64,7 +64,8 @@ of the failing function and hide the other one::

 $ py.test
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 2 items

 test_module.py .F

@@ -78,8 +79,8 @@ of the failing function and hide the other one::

 test_module.py:9: AssertionError
 ----------------------------- Captured stdout ------------------------------
-setting up <function test_func2 at 0x20160c8>
-==================== 1 failed, 1 passed in 0.01 seconds ====================
+setting up <function test_func2 at 0x228faa0>
+==================== 1 failed, 1 passed in 0.02 seconds ====================

 Accessing captured output from a test function
 ---------------------------------------------------

@@ -17,7 +17,7 @@
 #
 # The full version, including alpha/beta/rc tags.
 # The short X.Y version.
-version = release = "2.3.0.dev1"
+version = release = "2.3.0.dev5"

 import sys, os

@@ -23,4 +23,5 @@ Full pytest documentation
 :hidden:

 changelog.txt
+examples/resources.txt

@@ -44,9 +44,10 @@ then you can just invoke ``py.test`` without command line options::

 $ py.test
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 mymodule.py .

-========================= 1 passed in 0.02 seconds =========================
+========================= 1 passed in 0.07 seconds =========================

@@ -26,25 +26,29 @@ You can then restrict a test run to only run tests marked with ``webtest``::

 $ py.test -v -m webtest
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1 -- /home/hpk/venv/1/bin/python
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2 -- /home/hpk/venv/1/bin/python
+cachedir: /home/hpk/tmp/doc-exec-305/.cache
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 2 items

 test_server.py:3: test_send_http PASSED

 =================== 1 tests deselected by "-m 'webtest'" ===================
-================== 1 passed, 1 deselected in 0.00 seconds ==================
+================== 1 passed, 1 deselected in 0.02 seconds ==================

 Or the inverse, running all tests except the webtest ones::

 $ py.test -v -m "not webtest"
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1 -- /home/hpk/venv/1/bin/python
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2 -- /home/hpk/venv/1/bin/python
+cachedir: /home/hpk/tmp/doc-exec-305/.cache
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 2 items

 test_server.py:6: test_something_quick PASSED

 ================= 1 tests deselected by "-m 'not webtest'" =================
-================== 1 passed, 1 deselected in 0.01 seconds ==================
+================== 1 passed, 1 deselected in 0.02 seconds ==================

 Registering markers
 -------------------------------------

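Stepping back to the two selection runs above: they assume a test module roughly like the following (a minimal sketch -- only the marker name and the two test names are taken from the output above, the bodies are placeholders)::

    # test_server.py - sketch of the module the -m selection runs assume
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # would perform some webtest-style interaction

    def test_something_quick():
        pass  # an unmarked, quick test
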
@@ -143,38 +147,41 @@ the given argument::

 $ py.test -k send_http # running with the above defined examples
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 4 items

 test_server.py .

 =================== 3 tests deselected by '-ksend_http' ====================
-================== 1 passed, 3 deselected in 0.01 seconds ==================
+================== 1 passed, 3 deselected in 0.02 seconds ==================

 And you can also run all tests except the ones that match the keyword::

 $ py.test -k-send_http
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 4 items

 test_mark_classlevel.py ..
 test_server.py .

 =================== 1 tests deselected by '-k-send_http' ===================
-================== 3 passed, 1 deselected in 0.01 seconds ==================
+================== 3 passed, 1 deselected in 0.02 seconds ==================

 Or to only select the class::

 $ py.test -kTestClass
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 4 items

 test_mark_classlevel.py ..

 =================== 2 tests deselected by '-kTestClass' ====================
-================== 2 passed, 2 deselected in 0.01 seconds ==================
+================== 2 passed, 2 deselected in 0.02 seconds ==================

 .. _`adding a custom marker from a plugin`:

@@ -223,23 +230,25 @@ the test needs::

 $ py.test -E stage2
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_someenv.py s

-======================== 1 skipped in 0.01 seconds =========================
+======================== 1 skipped in 0.02 seconds =========================

 and here is one that specifies exactly the environment needed::

 $ py.test -E stage1
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_someenv.py .

-========================= 1 passed in 0.01 seconds =========================
+========================= 1 passed in 0.02 seconds =========================

 The ``--markers`` option always gives you a list of available markers::

@@ -298,7 +307,7 @@ Let's run this without capturing output and see what we get::
 glob args=('class',) kwargs={'x': 2}
 glob args=('module',) kwargs={'x': 1}
 .
-1 passed in 0.01 seconds
+1 passed in 0.02 seconds

 marking platform specific tests with pytest
 --------------------------------------------------------------

@@ -351,25 +360,27 @@ then you will see two test skipped and two executed tests as expected::

 $ py.test -rs # this option reports skip reasons
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 4 items

 test_plat.py s.s.
 ========================= short test summary info ==========================
-SKIP [2] /home/hpk/tmp/doc-exec-222/conftest.py:12: cannot run on platform linux2
+SKIP [2] /home/hpk/tmp/doc-exec-305/conftest.py:12: cannot run on platform linux2

-=================== 2 passed, 2 skipped in 0.01 seconds ====================
+=================== 2 passed, 2 skipped in 0.02 seconds ====================

 Note that if you specify a platform via the marker-command line option like this::

 $ py.test -m linux2
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.3 -- pytest-2.2.5.dev1
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 4 items

 test_plat.py .

 =================== 3 tests deselected by "-m 'linux2'" ====================
-================== 1 passed, 3 deselected in 0.01 seconds ==================
+================== 1 passed, 3 deselected in 0.02 seconds ==================

 then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.

@@ -49,7 +49,8 @@ You can now run the test::

 $ py.test test_sample.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_sample.py F

@@ -57,7 +58,7 @@ You can now run the test::
 ================================= FAILURES =================================
 _______________________________ test_answer ________________________________

-mysetup = <conftest.MySetup instance at 0x17f21b8>
+mysetup = <conftest.MySetup instance at 0x27e5320>

 def test_answer(mysetup):
 app = mysetup.myapp()

@@ -66,7 +67,7 @@ You can now run the test::
 E assert 54 == 42

 test_sample.py:4: AssertionError
-========================= 1 failed in 0.01 seconds =========================
+========================= 1 failed in 0.02 seconds =========================

 This means that our ``mysetup`` object was successfully instantiated
 and ``mysetup.app()`` returned an initialized ``MyApp`` instance.

@@ -122,14 +123,15 @@ Running it yields::

 $ py.test test_ssh.py -rs
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_ssh.py s
 ========================= short test summary info ==========================
-SKIP [1] /tmp/doc-exec-220/conftest.py:22: specify ssh host with --ssh
+SKIP [1] /home/hpk/tmp/doc-exec-306/conftest.py:22: specify ssh host with --ssh

-======================== 1 skipped in 0.01 seconds =========================
+======================== 1 skipped in 0.02 seconds =========================

 If you specify a command line option like ``py.test --ssh=python.org`` the test will execute as expected.

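For context, the ``--ssh`` skip shown above is produced by option/funcarg wiring along these lines (a sketch in the pre-2.3 funcarg style this example uses; apart from ``--ssh`` and the skip message, the names are illustrative)::

    # conftest.py - sketch of the --ssh option and skip behaviour shown above
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--ssh", action="store", default=None,
                         help="ssh host to connect to for the ssh tests")

    class MySetup(object):
        def __init__(self, request):
            self.config = request.config

        def getsshconnection(self):
            host = self.config.option.ssh
            if host is None:
                pytest.skip("specify ssh host with --ssh")
            return host  # a real setup would open and return a connection here

    def pytest_funcarg__mysetup(request):
        return MySetup(request)
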
@@ -27,7 +27,8 @@ now execute the test specification::

 nonpython $ py.test test_simple.yml
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 2 items

 test_simple.yml .F

@@ -37,7 +38,7 @@ now execute the test specification::
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-==================== 1 failed, 1 passed in 0.06 seconds ====================
+==================== 1 failed, 1 passed in 0.11 seconds ====================

 You get one dot for the passing ``sub1: sub1`` check and one failure.
 Obviously in the above ``conftest.py`` you'll want to implement a more

@@ -56,7 +57,9 @@ consulted when reporting in ``verbose`` mode::

 nonpython $ py.test -v
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4 -- /home/hpk/venv/0/bin/python
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2 -- /home/hpk/venv/1/bin/python
+cachedir: /home/hpk/p/pytest/doc/en/.cache
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 2 items

 test_simple.yml:1: usecase: ok PASSED

@@ -67,17 +70,18 @@ consulted when reporting in ``verbose`` mode::
 usecase execution failed
 spec failed: 'some': 'other'
 no further details known at this point.
-==================== 1 failed, 1 passed in 0.06 seconds ====================
+==================== 1 failed, 1 passed in 0.04 seconds ====================

 While developing your custom test collection and execution it's also
 interesting to just look at the collection tree::

 nonpython $ py.test --collectonly
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 2 items
 <YamlFile 'test_simple.yml'>
 <YamlItem 'ok'>
 <YamlItem 'hello'>

-============================= in 0.07 seconds =============================
+============================= in 0.04 seconds =============================

@@ -2,38 +2,65 @@
 V2: Creating and working with parametrized test resources
 ===============================================================

-# XXX collection versus setup-time
-# XXX parametrize-relation?
-
-pytest-2.3 provides generalized resource management allowing
-to flexibly manage caching and parametrization across your test suite.
-
-This is draft documentation, pending refinements and changes according
-to feedback and to implementation or backward compatibility issues
-(the new mechanism is supposed to allow fully backward compatible
-operations for uses of the "funcarg" mechanism.
+pytest-2.X provides generalized resource parametrization, unifying
+and extending all existing funcarg and parametrization features of
+previous pytest versions. Existing test suites and plugins written
+for previous pytest versions shall run unmodified.
+
+This V2 draft focuses on incorporating feedback provided by Floris Bruynooghe,
+Carl Meyer and Ronny Pfannschmidt. It remains as draft documentation, pending
+further refinements and changes according to implementation or backward
+compatibility issues. The main changes to V1 are:
+
+* changed API names (atnode -> scopenode)
+* register_factory now happens at Node.collect_init() or pytest_collection_init
+  time. It will raise an Error if called during the runtestloop
+  (which performs setup/call/teardown for each collected test).
+* new examples and notes related to @parametrize and metafunc.parametrize()
+* use 2.X as the version for introduction - not sure if 2.3 or 2.4 will
+  actually bring it.
+* examples/uses which were previously not possible to implement easily
+  are marked with "NEW" in the title.

-the new global pytest_runtest_init hook
+(NEW) the init_collection and init_runtestloop hooks
 ------------------------------------------------------

-Prior to 2.3, pytest offered a pytest_configure and a pytest_sessionstart
-hook which was used often to setup global resources. This suffers from
-several problems. First of all, in distributed testing the master would
-also setup test resources that are never needed because it only co-ordinates
-the test run activities of the slave processes. Secondly, in large test
-suites resources are setup that might not be needed for the concrete test
-run. The first issue is solved through the introduction of a specific
-hook::
-
-    def pytest_runtest_init(session):
-        # called ahead of pytest_runtestloop() test execution
-
-This hook will only be called in processes that actually run tests.
-
-The second issue is solved through a new register/getresource API which
-will only ever setup resources if they are needed. See the following
-examples and sections on how this works.
+pytest has for a long time offered a pytest_configure and a pytest_sessionstart
+hook, which are often used to setup global resources. This suffers from
+several problems:
+
+1. In distributed testing the master process would setup test resources
+   that are never needed because it only co-ordinates the test run
+   activities of the slave processes.
+
+2. In large test suites resources are created which might not be needed
+   for the concrete test run.
+
+3. Even if you only perform a collection (with "--collectonly"),
+   resource-setup will be executed.
+
+4. There is no good way to allow global parametrized collection and setup.
+
+The existing hooks are not a good place to address these issues. pytest-2.X
+solves all these issues through the introduction of two specific hooks
+(and the new register_factory/getresource API)::
+
+    def pytest_init_collection(session):
+        # called ahead of pytest_collection, which implements the
+        # collection process
+
+    def pytest_init_runtestloop(session):
+        # called ahead of pytest_runtestloop() which executes the
+        # setup and calling of tests
+
+The pytest_init_collection hook can be used for registering resources,
+see `global resource management`_ and `parametrizing global resources`_.
+
+The pytest_init_runtestloop hook can be used to setup and/or interact with
+global resources. If you just use a global resource, you may explicitly
+use it in a function argument or through `class resource attributes`_.
+
+.. _`global resource management`:

 managing a global database resource
 ---------------------------------------------------------------

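To make the division of labour between the two proposed hooks concrete, a conftest.py might combine them as in this minimal sketch (the hook names, ``register_factory`` and the ``(name, node)`` factory signature are the draft's proposal, not a shipped pytest API)::

    # conftest.py - sketch against the draft's proposed API

    class Database(object):
        def destroy(self):
            pass  # close connections, drop test data, etc.

    def factory_db(name, node):
        db = Database()
        node.addfinalizer(db.destroy)
        return db

    def pytest_init_collection(session):
        # runs ahead of collection: only register factories here,
        # nothing is instantiated yet
        session.register_factory("db", factory_db)

    def pytest_init_runtestloop(session):
        # runs only in processes that actually execute tests, so
        # expensive per-run setup can safely go here
        pass
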
|
@ -41,6 +68,8 @@ managing a global database resource
|
||||||
If you have one database object which you want to use in tests
|
If you have one database object which you want to use in tests
|
||||||
you can write the following into a conftest.py file::
|
you can write the following into a conftest.py file::
|
||||||
|
|
||||||
|
# contest of conftest.py
|
||||||
|
|
||||||
class Database:
|
class Database:
|
||||||
def __init__(self):
|
def __init__(self):
|
||||||
print ("database instance created")
|
print ("database instance created")
|
||||||
|
@@ -52,51 +81,71 @@ you can write the following into a conftest.py file::
         node.addfinalizer(db.destroy)
         return db

-    def pytest_runtest_init(session):
-        session.register_resource("db", factory_db, atnode=session)
+    def pytest_init_collection(session):
+        session.register_factory("db", factory_db)

-You can then access the constructed resource in a test like this::
+You can then access the constructed resource in a test by specifying
+the pre-registered name in your function definition::

    def test_something(db):
        ...

-The "db" function argument will lead to a lookup of the respective
-factory value and be passed to the function body. According to the
-registration, the db object will be instantiated on a per-session basis
-and thus reused across all test functions that require it.
+The "db" function argument will lead to a lookup and call of the respective
+factory function and its result will be passed to the function body.
+As the factory is registered on the session, it will by default only
+get called once per session and its value will thus be re-used across
+the whole test session.
+
+Previously, factories would need to call the ``request.cached_setup()``
+method to manage caching. Here is how we could implement the above
+with traditional funcargs::
+
+    # content of conftest.py
+    class Database:
+        ... as above
+
+    def pytest_funcarg__db(request):
+        return request.cached_setup(setup=Database,
+                                    teardown=lambda db: db.destroy(),
+                                    scope="session")
+
+As the funcarg factory is automatically registered by detecting its
+name and because it is called each time "db" is requested, it needs
+to care for caching itself, here by calling the cached_setup() method
+to manage it. As it encodes the caching scope in the factory code body,
+py.test has no way to report this via e.g. "py.test --funcargs".
+More seriously, it's not exactly trivial to provide parametrization:
+we would need to add a "parametrize" decorator where the resource is
+used or implement a pytest_generate_tests(metafunc) hook to
+call metafunc.parametrize() with the "db" argument, and then the
+factory would need to care to pass the appropriate "extrakey" into
+cached_setup(). By contrast, the new way just requires a modified
+call to register factories::
+
+    def pytest_init_collection(session):
+        session.register_factory("db", [factory_mysql, factory_pg])
+
+and no other code needs to change or get decorated.

-instantiating a database resource per-module
+(NEW) instantiating one database for each test module
 ---------------------------------------------------------------

 If you want one database instance per test module you can restrict
-caching by modifying the "atnode" parameter of the registration
+caching by modifying the "scopenode" parameter of the registration
 call above::

-    def pytest_runtest_init(session):
-        session.register_resource("db", factory_db, atnode=pytest.Module)
+    def pytest_init_collection(session):
+        session.register_factory("db", factory_db, scopenode=pytest.Module)

 Neither the tests nor the factory function will need to change.
-This also means that you can decide the scoping of resources
-at runtime - e.g. based on a command line option: for developer
-settings you might want per-session and for Continous Integration
-runs you might prefer per-module or even per-function scope like this::
+This means that you can decide the scoping of resources at runtime -
+e.g. based on a command line option: for developer settings you might
+want per-session and for Continuous Integration runs you might prefer
+per-module or even per-function scope like this::

-    def pytest_runtest_init(session):
-        session.register_resource_factory("db", factory_db,
-            atnode=pytest.Function)
-
-parametrized resources
-----------------------------------
-
-If you want to rerun tests with different resource values you can specify
-a list of factories instead of just one::
-
-    def pytest_runtest_init(session):
-        session.register_factory("db", [factory1, factory2], atnode=session)
-
-In this case all tests that depend on the "db" resource will be run twice
-using the respective values obtained from the two factory functions.
+    def pytest_init_collection(session):
+        session.register_factory("db", factory_db,
+                                 scopenode=pytest.Function)

 Using a resource from another resource factory
 ----------------------------------------------

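Returning to the per-module scoping above: the draft notes that the scope can be chosen at runtime, e.g. from a command line option, without showing code. A sketch of that idea could look as follows (the ``--db-scope`` option name is hypothetical, ``factory_db`` is the factory from the earlier example, and ``register_factory``/``scopenode`` follow the draft's proposed API)::

    # conftest.py - sketch only
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--db-scope", action="store", default="session",
                         help="scope for the db resource: session, module or function")

    def pytest_init_collection(session):
        scopenodes = {
            "session": session,
            "module": pytest.Module,
            "function": pytest.Function,
        }
        scope = session.config.option.db_scope
        session.register_factory("db", factory_db, scopenode=scopenodes[scope])
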
@@ -105,9 +154,11 @@ You can use the database resource from a another resource factory through
 the ``node.getresource()`` method. Let's add a resource factory for
 a "db_users" table at module-level, extending the previous db-example::

-    def pytest_runtest_init(session):
+    def pytest_init_collection(session):
         ...
-        session.register_factory("db_users", createusers, atnode=module)
+        # this factory will be using a scopenode=pytest.Module because
+        # it is defined in a test module.
+        session.register_factory("db_users", createusers)

     def createusers(name, node):
         db = node.getresource("db")

@@ -125,43 +176,194 @@ is not available at a more general scope. Concretely, if you
 table is defined as a per-session resource and the database object as a
 per-module one, the table creation cannot work on a per-session basis.

-Setting resources as class attributes
+amending/decorating a resource / funcarg__ compatibility
+----------------------------------------------------------------------
+
+If you want to decorate a session-registered resource with
+a test-module one, you can do the following::
+
+    # content of conftest.py
+    def pytest_init_collection(session):
+        session.register_factory("db_users", createusers)
+
+This will register a db_users factory on a per-session basis.
+If you want to create a dummy user such that all test
+methods in a test module can work with it::
+
+    # content of test_user_admin.py
+    def setup_class(cls, db_users):
+
+    def pytest_init_collection(session):
+        session.register_factory("db_users", create_users,
+                                 scopenode=pytest.Module)
+
+    def create_users(name, node):
+        # get the session-managed resource
+        db_users = node.getresource(name)
+        # add a user and define a remove_user undo function
+        ...
+        node.addfinalizer(remove_user)
+        return db_users
+
+    def test_user_fields(db_users):
+        # work with db_users with a pre-created entry
+        ...
+
+Using the pytest_funcarg__ mechanism, you can do the equivalent::
+
+    # content of test_user_admin.py
+
+    def pytest_funcarg__db_users(request):
+        def create_user():
+            db_users = request.getfuncargvalue("db_users")
+            # add a user
+            return db_users
+        def remove_user(db_users):
+            ...
+        return request.cached_setup(create_user, remove_user, scope="module")
+
+As the funcarg mechanism is implemented in terms of the new API
+it's also possible to mix - use register_factory/getresource at plugin-level
+and pytest_funcarg__ factories at test module level.
+
+As discussed previously with `global resource management`_, the funcarg-factory
+does not easily extend to provide parametrization.
+
+.. _`class resource attributes`:
+
+(NEW) Setting resources as class attributes
 -------------------------------------------

 If you want to make an attribute available on a test class, you can
-use the resource_attr marker::
+use a new mark::

-    @pytest.mark.resource_attr("db")
+    @pytest.mark.class_resource("db")
     class TestClass:
         def test_something(self):
             #use self.db

-Note that this way of using resources can be used on unittest.TestCase
-instances as well (function arguments can not be added due to unittest
-limitations).
+Note that this way of using resources works with unittest.TestCase-style
+tests as well. If you have defined "db" as a parametrized resource,
+the functions of the Test class will be run multiple times with different
+values found in "self.db".
+
+Previously, pytest could not offer its resource management features
+for such tests since those were tied to passing function arguments
+("funcargs") and this cannot be easily integrated with the unittest
+framework and its common per-project customizations.

-How the funcarg mechanism is implemented (internal notes)
+.. _`parametrizing global resources`:
+
+(NEW) parametrizing global resources
+----------------------------------------------------
+
+If you want to rerun tests with different resource values you can specify
+a list of factories instead of just one::
+
+    def pytest_init_collection(session):
+        session.register_factory("db", [factory1, factory2])
+
+In this case all tests that require the "db" resource will be run twice
+using the respective values obtained from the two factory functions.
+
+For reporting purposes you might want to also define identifiers
+for the db values::
+
+    def pytest_init_collection(session):
+        session.register_factory("db", [factory1, factory2],
+                                 ids=["mysql", "pg"])
+
+This will make pytest use the respective id values when reporting
+nodeids.
+
+(NEW) Declaring resource usage / implicit parametrization
+----------------------------------------------------------
+
+Sometimes you may have a resource that can work in multiple variants,
+like using different database backends. As another use-case,
+pytest's own test suite uses a "testdir" funcarg which helps to setup
+example scenarios, perform a subprocess-pytest run and check the output.
+However, there are many features that should also work with the pytest-xdist
+mode, distributing tests to multiple CPUs or hosts. The invocation
+variants are not visible in the function signature and cannot be easily
+addressed through a "parametrize" decorator or call. Nevertheless we want
+both invocation variants to be collected and executed.
+
+The solution is to tell pytest that you are using a resource implicitly::
+
+    @pytest.mark.uses_resource("invocation-option")
+    class TestClass:
+        def test_method(self, testdir):
+            ...
+
+When the testdir factory gets the parametrized "invocation-option"
+resource, it will see different values, depending on what the respective
+factories provide. To register the invocation-mode factory you would write::
+
+    # content of conftest.py
+    def pytest_init_collection(session):
+        session.register_factory("invocation-option",
+                                 [lambda **kw: "", lambda **kw: "-n1"])
+
+The testdir factory can then access it easily::
+
+    option = node.getresource("invocation-option", "")
+    ...
+
+.. note::
+
+   apart from the "uses_resource" decoration none of the already
+   written test functions needs to be modified for the new API.
+
+   The implicit "testdir" parametrization only happens for the tests
+   which declare use of the invocation-option resource. All other
+   tests will get the default value passed as the second parameter
+   to node.getresource() above. You can thus restrict
+   running the variants to particular tests or test sets.
+
+To conclude, these three code fragments work together to allow efficient
+cross-session resource parametrization.
+
+Implementation and compatibility notes
+============================================================
+
+The new API is designed to support all existing resource parametrization
+and funcarg usages. This chapter discusses implementation aspects.
+Feel free to choose ignorance and only consider the above usage-level.
+
+Implementing the funcarg mechanism in terms of the new API
 -------------------------------------------------------------

-Prior to pytest-2.3/4, pytest advertised the "funcarg" mechanism
-which provided a subset functionality to the generalized resource management.
-In fact, the previous mechanism is implemented in terms of the new API
-and should continue to work unmodified. It basically automates the
-registration of factories through automatic discovery of
-``pytest_funcarg_NAME`` function on plugins, Python modules and classes.
+Prior to pytest-2.X, pytest mainly advertised the "funcarg" mechanism
+for resource management. It provides automatic registration of
+factories through discovery of ``pytest_funcarg__NAME`` factory methods
+on plugins, test modules, classes and functions. Those factories are
+called *each time* a resource (funcarg) is required, hence the support
+for a ``request.cached_setup()`` method which helps to cache resources
+across calls. Request objects internally keep a (item, requested_name,
+remaining-factories) state. The "remaining-factories" state is
+used for implementing decorating factories; a factory for a given
+name can call ``getfuncargvalue(name)`` to invoke the next-matching
+factory and then amend the return value.

-As an example let's consider the Module.setup() method::
+In order to implement the existing funcarg mechanism through
+the new API, the new API needs to internally keep around similar
+state. XXX
+
+As an example let's consider the Module.setup_collect() method::

     class Module(PyCollector):
-        def setup(self):
+        def setup_collect(self):
             for name, func in self.obj.__dict__.items():
                 if name.startswith("pytest_funcarg__"):
                     resourcename = name[len("pytest_funcarg__"):]
-                    self._register_factory(resourcename,
+                    self.register_factory(resourcename,
                         RequestAdapter(self, name, func))

-The request adapater takes care to provide the pre-2.3 API for funcarg
-factories, providing request.cached_setup/addfinalizer/getfuncargvalue
-methods.
+The request adapter takes care to provide the pre-2.X API for funcarg
+factories, i.e. request.cached_setup/addfinalizer/getfuncargvalue
+methods and some attributes.

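Pulling the parametrization pieces of the draft together, a complete (if toy) setup under the proposed API might read like this sketch (``factory_mysql``/``factory_pg`` are the names the draft itself uses; the dictionaries stand in for real database objects)::

    # content of conftest.py - sketch against the draft's proposed API

    def factory_mysql(name, node):
        return {"backend": "mysql"}   # stand-in for a MySQL-backed Database

    def factory_pg(name, node):
        return {"backend": "pg"}      # stand-in for a PostgreSQL-backed Database

    def pytest_init_collection(session):
        # every test requiring "db" is run once per factory; the ids show
        # up in the reported node ids, e.g. test_backend_available[mysql]
        session.register_factory("db", [factory_mysql, factory_pg],
                                 ids=["mysql", "pg"])

    # content of test_db.py
    def test_backend_available(db):
        assert db["backend"] in ("mysql", "pg")
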
@@ -61,7 +61,8 @@ factory. Running the test looks like this::

 $ py.test test_simplefactory.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_simplefactory.py F

@@ -76,7 +77,7 @@ factory. Running the test looks like this::
 E assert 42 == 17

 test_simplefactory.py:5: AssertionError
-========================= 1 failed in 0.01 seconds =========================
+========================= 1 failed in 0.02 seconds =========================

 This shows that the test function was called with a ``myfuncarg``
 argument value of ``42`` and the assert fails as expected. Here is

|
||||||
|
|
||||||
$ py.test test_example.py
|
$ py.test test_example.py
|
||||||
=========================== test session starts ============================
|
=========================== test session starts ============================
|
||||||
platform linux2 -- Python 2.7.1 -- pytest-2.2.4
|
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
|
||||||
|
plugins: xdist, bugzilla, cache, oejskit, pep8, cov
|
||||||
collecting ... collected 10 items
|
collecting ... collected 10 items
|
||||||
|
|
||||||
test_example.py .........F
|
test_example.py .........F
|
||||||
|
@ -169,7 +171,7 @@ Running this will generate ten invocations of ``test_func`` passing in each of t
|
||||||
E assert 9 < 9
|
E assert 9 < 9
|
||||||
|
|
||||||
test_example.py:6: AssertionError
|
test_example.py:6: AssertionError
|
||||||
==================== 1 failed, 9 passed in 0.02 seconds ====================
|
==================== 1 failed, 9 passed in 0.03 seconds ====================
|
||||||
|
|
||||||
Obviously, only when ``numiter`` has the value of ``9`` does the test fail. Note that the ``pytest_generate_tests(metafunc)`` hook is called during
|
Obviously, only when ``numiter`` has the value of ``9`` does the test fail. Note that the ``pytest_generate_tests(metafunc)`` hook is called during
|
||||||
the test collection phase which is separate from the actual test running.
|
the test collection phase which is separate from the actual test running.
|
||||||
|
@ -177,7 +179,8 @@ Let's just look at what is collected::
|
||||||
|
|
||||||
$ py.test --collectonly test_example.py
|
$ py.test --collectonly test_example.py
|
||||||
=========================== test session starts ============================
|
=========================== test session starts ============================
|
||||||
platform linux2 -- Python 2.7.1 -- pytest-2.2.4
|
platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
|
||||||
|
plugins: xdist, bugzilla, cache, oejskit, pep8, cov
|
||||||
collecting ... collected 10 items
|
collecting ... collected 10 items
|
||||||
<Module 'test_example.py'>
|
<Module 'test_example.py'>
|
||||||
<Function 'test_func[0]'>
|
<Function 'test_func[0]'>
|
||||||
|
@@ -191,19 +194,39 @@ Let's just look at what is collected::
 <Function 'test_func[8]'>
 <Function 'test_func[9]'>

-============================= in 0.00 seconds =============================
+============================= in 0.02 seconds =============================

 If you want to select only the run with the value ``7`` you could do::

 $ py.test -v -k 7 test_example.py # or -k test_func[7]
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4 -- /home/hpk/venv/0/bin/python
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2 -- /home/hpk/venv/1/bin/python
+cachedir: /home/hpk/tmp/doc-exec-271/.cache
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 10 items

+test_example.py:5: test_func[0] PASSED
+test_example.py:5: test_func[1] PASSED
+test_example.py:5: test_func[2] PASSED
+test_example.py:5: test_func[3] PASSED
+test_example.py:5: test_func[4] PASSED
+test_example.py:5: test_func[5] PASSED
+test_example.py:5: test_func[6] PASSED
 test_example.py:5: test_func[7] PASSED
+test_example.py:5: test_func[8] PASSED
+test_example.py:5: test_func[9] FAILED

-======================= 9 tests deselected by '-k7' ========================
-================== 1 passed, 9 deselected in 0.01 seconds ==================
+================================= FAILURES =================================
+_______________________________ test_func[9] _______________________________
+
+numiter = 9
+
+def test_func(numiter):
+> assert numiter < 9
+E assert 9 < 9
+
+test_example.py:6: AssertionError
+==================== 1 failed, 9 passed in 0.03 seconds ====================

 You might want to look at :ref:`more parametrization examples <paramexamples>`.

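The ten ``test_func[N]`` invocations above come from a ``pytest_generate_tests`` hook that is not part of this hunk; a present-day equivalent of the generating module looks roughly like this (a sketch, not necessarily the exact example from the docs)::

    # test_example.py - sketch of the parametrized test the output above assumes
    def pytest_generate_tests(metafunc):
        if "numiter" in metafunc.fixturenames:   # "funcargnames" on pytest 2.x
            metafunc.parametrize("numiter", range(10))

    def test_func(numiter):
        assert numiter < 9
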
@@ -22,9 +22,14 @@ Installation options::
 To check your installation has installed the correct version::

 $ py.test --version
-This is py.test version 2.2.4, imported from /home/hpk/p/pytest/pytest.py
+This is py.test version 2.3.0.dev2, imported from /home/hpk/p/pytest/pytest.pyc
 setuptools registered plugins:
 pytest-xdist-1.8 at /home/hpk/p/pytest-xdist/xdist/plugin.pyc
+pytest-bugzilla-0.1 at /home/hpk/tmp/eanxgeek/pytest_bugzilla.pyc
+pytest-cache-0.9 at /home/hpk/p/pytest-cache/pytest_cache.pyc
+oejskit-0.9.0 at /home/hpk/p/js-infrastructure/oejskit/pytest_jstests.pyc
+pytest-pep8-1.0.1 at /home/hpk/venv/1/local/lib/python2.7/site-packages/pytest_pep8.pyc
+pytest-cov-1.6 at /home/hpk/venv/1/local/lib/python2.7/site-packages/pytest_cov.pyc

 If you get an error checkout :ref:`installation issues`.

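The failing run reproduced in the next two hunks exercises the classic first example; reconstructed from the traceback shown there (``+ where 4 = func(3)``), the module is presumably along these lines::

    # test_sample.py - hedged reconstruction, not copied from this diff
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5
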
@@ -46,7 +51,8 @@ That's it. You can execute the test function now::

 $ py.test
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_sample.py F

@@ -60,7 +66,7 @@ That's it. You can execute the test function now::
 E + where 4 = func(3)

 test_sample.py:5: AssertionError
-========================= 1 failed in 0.01 seconds =========================
+========================= 1 failed in 0.02 seconds =========================

 py.test found the ``test_answer`` function by following :ref:`standard test discovery rules <test discovery>`, basically detecting the ``test_`` prefixes. We got a failure report because our little ``func(3)`` call did not return ``5``.

@@ -95,7 +101,7 @@ Running it with, this time in "quiet" reporting mode::

 $ py.test -q test_sysexit.py
 collecting ... collected 1 items
 .
-1 passed in 0.00 seconds
+1 passed in 0.02 seconds

 .. todo:: For further ways to assert exceptions see the `raises`
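The module exercised here is not shown in the hunk; a sketch that would produce this single-dot quiet run, assuming the usual ``SystemExit`` example (all names assumed)::

    # test_sysexit.py -- hypothetical sketch
    import pytest

    def f():
        raise SystemExit(1)

    def test_mytest():
        # pytest.raises asserts that the wrapped block raises the given exception
        with pytest.raises(SystemExit):
            f()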
@@ -126,7 +132,7 @@ run the module by passing its filename::

 ================================= FAILURES =================================
 ____________________________ TestClass.test_two ____________________________

-self = <test_class.TestClass instance at 0x1a956c8>
+self = <test_class.TestClass instance at 0x2343830>

 def test_two(self):
 x = "hello"
@@ -134,7 +140,7 @@ run the module by passing its filename::

 E assert hasattr('hello', 'check')

 test_class.py:8: AssertionError
-1 failed, 1 passed in 0.01 seconds
+1 failed, 1 passed in 0.02 seconds

 The first test passed, the second failed. Again we can easily see
 the intermediate values used in the assertion, helping us to
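Only the failing method is visible in the traceback; the passing one is not. A sketch of ``test_class.py`` consistent with the "1 failed, 1 passed" summary (the body of ``test_one`` is an assumption)::

    # test_class.py -- sketch; test_one's body is assumed
    class TestClass:
        def test_one(self):
            x = "this"
            assert "h" in x              # passes

        def test_two(self):
            x = "hello"
            assert hasattr(x, "check")   # fails: str has no attribute "check"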
@@ -163,7 +169,7 @@ before performing the test function call. Let's just run it::

 ================================= FAILURES =================================
 _____________________________ test_needsfiles ______________________________

-tmpdir = local('/tmp/pytest-22/test_needsfiles0')
+tmpdir = local('/home/hpk/tmp/pytest-2885/test_needsfiles0')

 def test_needsfiles(tmpdir):
 print tmpdir
@@ -172,8 +178,8 @@ before performing the test function call. Let's just run it::

 test_tmpdir.py:3: AssertionError
 ----------------------------- Captured stdout ------------------------------
-/tmp/pytest-22/test_needsfiles0
-1 failed in 0.01 seconds
+/home/hpk/tmp/pytest-2885/test_needsfiles0
+1 failed in 0.22 seconds

 Before the test runs, a unique-per-test-invocation temporary directory
 was created. More info at :ref:`tmpdir handling`.
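From the traceback, ``test_needsfiles`` prints the injected ``tmpdir`` and fails at line 3; a minimal sketch (the final ``assert 0`` is an assumption based on that line number)::

    # test_tmpdir.py -- sketch; Python 2 print syntax kept to match the output above
    def test_needsfiles(tmpdir):
        print tmpdir   # captured and shown under "Captured stdout"
        assert 0       # deliberate failure so the tmpdir path gets displayed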
@@ -37,7 +37,7 @@ Welcome to pytest!

 - **integrates many common testing methods**

-- can integrate ``nose``, ``unittest.py`` and ``doctest.py`` style
+- can run many ``nose``, ``unittest.py`` and ``doctest.py`` style
 tests, including running testcases made for Django and trial
 - supports extended :ref:`xUnit style setup <xunitsetup>`
 - supports domain-specific :ref:`non-python tests`
@@ -130,7 +130,8 @@ Running it with the report-on-xfail option gives this output::

 example $ py.test -rx xfail_demo.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 6 items

 xfail_demo.py xxxxxx
@@ -147,7 +148,7 @@ Running it with the report-on-xfail option gives this output::

 XFAIL xfail_demo.py::test_hello6
 reason: reason

-======================== 6 xfailed in 0.03 seconds =========================
+======================== 6 xfailed in 0.04 seconds =========================

 .. _`evaluation of skipif/xfail conditions`:
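``xfail_demo.py`` itself is not part of the diff; per the report it contains six expected-to-fail tests. An abridged, hypothetical sketch showing the two flavours of XFAIL that appear in the ``-rx`` listing (declarative marker vs. imperative ``pytest.xfail``)::

    # xfail_demo.py (abridged sketch) -- the real file has test_hello .. test_hello6
    import pytest

    @pytest.mark.xfail
    def test_hello():
        assert 0                 # reported as XFAIL

    def test_hello6():
        pytest.xfail("reason")   # reported with "reason: reason", as above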
@@ -28,7 +28,8 @@ Running this would result in a passed test except for the last

 $ py.test test_tmpdir.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_tmpdir.py F
@@ -36,7 +37,7 @@ Running this would result in a passed test except for the last

 ================================= FAILURES =================================
 _____________________________ test_create_file _____________________________

-tmpdir = local('/tmp/pytest-23/test_create_file0')
+tmpdir = local('/home/hpk/tmp/pytest-2886/test_create_file0')

 def test_create_file(tmpdir):
 p = tmpdir.mkdir("sub").join("hello.txt")
@@ -47,7 +48,7 @@ Running this would result in a passed test except for the last

 E assert 0

 test_tmpdir.py:7: AssertionError
-========================= 1 failed in 0.02 seconds =========================
+========================= 1 failed in 0.23 seconds =========================

 .. _`base temporary directory`:
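The traceback shows the first line of ``test_create_file`` and the final ``assert 0`` at line 7; the lines in between are assumptions following the usual ``py.path.local`` API::

    # test_tmpdir.py -- sketch; the middle assertions are assumed
    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")                   # create a file inside the per-test tmpdir
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0                             # deliberate failure to show the tmpdir path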
@@ -24,7 +24,8 @@ Running it yields::

 $ py.test test_unittest.py
 =========================== test session starts ============================
-platform linux2 -- Python 2.7.1 -- pytest-2.2.4
+platform linux2 -- Python 2.7.3 -- pytest-2.3.0.dev2
+plugins: xdist, bugzilla, cache, oejskit, pep8, cov
 collecting ... collected 1 items

 test_unittest.py F
@@ -42,7 +43,7 @@ Running it yields::

 test_unittest.py:8: AssertionError
 ----------------------------- Captured stdout ------------------------------
 hello
-========================= 1 failed in 0.01 seconds =========================
+========================= 1 failed in 0.03 seconds =========================

 .. _`unittest.py style`: http://docs.python.org/library/unittest.html
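The ``unittest``-style module producing this run lies outside the hunk; a sketch of the kind of module that would yield the captured ``hello`` and the assertion failure shown above (class and method names are assumptions)::

    # test_unittest.py -- hypothetical sketch
    import unittest

    class MyTest(unittest.TestCase):
        def setUp(self):
            print("hello")    # captured, shown in the "Captured stdout" section

        def test_method(self):
            x = 1
            assert x == 2     # fails, producing the AssertionError above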
@@ -185,7 +185,7 @@ hook was invoked::

 $ python myinvoke.py
 collecting ... collected 0 items

-in 0.00 seconds
+in 0.01 seconds
 *** test run reporting finishing

 .. include:: links.inc
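``myinvoke.py`` runs pytest in-process and registers an extra plugin object whose hook prints the final line of the output; a minimal sketch, assuming the hook in question is ``pytest_sessionfinish`` (names assumed)::

    # myinvoke.py -- hypothetical sketch of in-process invocation with a custom plugin
    import pytest

    class MyPlugin(object):
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")

    pytest.main(["-q"], plugins=[MyPlugin()])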