some doc fixes and improvements to parametrized test examples, thanks ccxCZ for review and suggestions.

holger krekel 2011-02-09 14:55:21 +01:00
parent 2bd0c98801
commit d2f9b41519
6 changed files with 179 additions and 85 deletions


@ -89,8 +89,8 @@ class MarkGenerator:
class MarkDecorator:
""" A decorator for test functions and test classes. When applied
it will create :class:`MarkInfo` objects which may be
:ref:`retrieved by hooks as item keywords` MarkDecorator instances
are usually created by writing::
:ref:`retrieved by hooks as item keywords <excontrolskip>`.
MarkDecorator instances are often created like this::
mark1 = py.test.mark.NAME # simple MarkDecorator
mark2 = py.test.mark.NAME(name1=value) # parametrized MarkDecorator
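
    For illustration (not part of this commit), a hypothetical sketch of applying
    a mark and reading it back as an item keyword from a conftest.py hook; the
    ``webtest`` name is made up for the example::

        import py

        @py.test.mark.webtest            # apply a MarkDecorator to a test function
        def test_send_http():
            pass

        # in a conftest.py hook the mark shows up as an item keyword
        def pytest_runtest_setup(item):
            if 'webtest' in item.keywords:
                print("about to set up a webtest-marked test")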


@ -537,12 +537,10 @@ class Metafunc:
list of calls to the test function will be used.
:arg param: will be exposed to a later funcarg factory invocation
through the ``request.param`` attribute. Setting it (instead of
directly providing a ``funcargs`` dictionary) is called
*indirect parametrization*. Indirect parametrization is
preferable if test values are expensive to setup or can
only be created after certain fixtures or test-run related
initialization code has been run.
through the ``request.param`` attribute. It allows deferring
test fixture setup activities to when an actual
test is run. Note that ``addcall()`` itself is invoked during
the collection phase of a test run.
"""
assert funcargs is None or isinstance(funcargs, dict)
if id is None:
@ -556,7 +554,13 @@ class Metafunc:
self._calls.append(CallSpec(funcargs, id, param))
class FuncargRequest:
""" A request for function arguments from a test function. """
""" A request for function arguments from a test function.
Note that there is an optional ``param`` attribute in case
there was an invocation of metafunc.addcall(param=...).
If no such call was done in a ``pytest_generate_tests``
hook, the attribute will not be present.
"""
_argprefix = "pytest_funcarg__"
_argname = None
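
    A hypothetical conftest.py sketch (the ``db`` funcarg and the backend strings
    are made up) illustrating the optional ``param`` attribute described above::

        def pytest_generate_tests(metafunc):
            if 'db' in metafunc.funcargnames:
                metafunc.addcall(param="sqlite")  # request.param will be "sqlite"
                metafunc.addcall()                # request.param will not be set

        def pytest_funcarg__db(request):
            # guard against the attribute being absent, as the docstring notes
            backend = getattr(request, "param", "default-backend")
            return {"backend": backend}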


@ -12,7 +12,7 @@ need more examples or have questions. Also take a look at the :ref:`comprehensiv
reportingdemo.txt
simple.txt
pythoncollection.txt
mysetup.txt
parametrize.txt
pythoncollection.txt
nonpython.txt


@ -1,4 +1,6 @@
.. _paramexamples:
parametrizing tests
=================================================
@ -6,6 +8,137 @@ py.test allows you to easily implement your own custom
parametrization scheme for tests. Here we provide
some examples for inspiration and re-use.
generating parameter combinations, depending on the command line
----------------------------------------------------------------------------
.. regendoc:wipe
Let's say we want to execute a test with different parameters,
and the parameter range should be determined by a command
line argument. Let's first write a simple computation test::
    # content of test_compute.py

    def test_compute(param1):
        assert param1 < 4
Now we add a test configuration like this::
    # content of conftest.py

    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true",
            help="run all combinations")

    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.funcargnames:
            if metafunc.config.option.all:
                end = 5
            else:
                end = 2
            for i in range(end):
                metafunc.addcall(funcargs={'param1': i})
This means that we only run 2 tests if we do not pass ``--all``::
$ py.test -q test_compute.py
collecting ... collected 2 items
..
2 passed in 0.01 seconds
We run only two computations, so we see two dots.
Let's run the full monty::
$ py.test -q --all
collecting ... collected 5 items
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
param1 = 4
def test_compute(param1):
> assert param1 < 4
E assert 4 < 4
test_compute.py:3: AssertionError
1 failed, 4 passed in 0.03 seconds
As expected, when running the full range of ``param1`` values
we'll get an error on the last one.
Deferring the setup of parametrized resources
---------------------------------------------------
.. regendoc:wipe
The parametrization of test functions happens at collection
time. It is often a good idea to set up possibly expensive
resources only when the actual test is run. Here is a simple
example of how you can achieve that::
    # content of test_backends.py

    import pytest
    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
            pytest.fail("deliberately failing for demo purposes")
Now we add a test configuration that generates
two invocations of the ``test_db_initialized`` function, and
a factory that creates a database object only when
each test is actually run::
    # content of conftest.py

    def pytest_generate_tests(metafunc):
        if 'db' in metafunc.funcargnames:
            metafunc.addcall(param="d1")
            metafunc.addcall(param="d2")

    class DB1:
        "one database object"

    class DB2:
        "alternative database object"

    def pytest_funcarg__db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        else:
            raise ValueError("invalid internal test config")
Let's first see how it looks at collection time::
$ py.test test_backends.py --collectonly
<Module 'test_backends.py'>
<Function 'test_db_initialized[0]'>
<Function 'test_db_initialized[1]'>
And then when we run the test::
$ py.test -q test_backends.py
collecting ... collected 2 items
.F
================================= FAILURES =================================
__________________________ test_db_initialized[1] __________________________
db = <conftest.DB2 instance at 0x1a5b488>
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
> pytest.fail("deliberately failing for demo purposes")
E Failed: deliberately failing for demo purposes
test_backends.py:6: Failed
1 failed, 1 passed in 0.02 seconds
Now you see that one invocation of the test passes and the other fails,
as is to be expected.
Parametrizing test methods through per-class configuration
--------------------------------------------------------------
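
The parametrizing code itself is unchanged by this commit and therefore not
shown in the diff; a sketch of what such a per-class configuration might look
like, reconstructed to be consistent with the output below (details such as
file name and line numbers may differ)::

    # content of ./test_parametrize.py (sketch)
    import pytest

    def pytest_generate_tests(metafunc):
        # called once per test function; look up the argument sets on the class
        for funcargs in metafunc.cls.params[metafunc.function.__name__]:
            metafunc.addcall(funcargs=funcargs)

    class TestClass:
        # map of test function name -> list of argument dictionaries
        params = {
            'test_equals': [dict(a=1, b=2), dict(a=3, b=3)],
            'test_zerodivision': [dict(a=1, b=0), dict(a=3, b=2)],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            pytest.raises(ZeroDivisionError, "a/b")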
@ -41,12 +174,23 @@ Running it means we run two tests for each test function, using
the respective settings::
$ py.test -q
collecting ... collected 4 items
F..F
collecting ... collected 6 items
.FF..F
================================= FAILURES =================================
__________________________ test_db_initialized[1] __________________________
db = <conftest.DB2 instance at 0xf81c20>
def test_db_initialized(db):
# a dummy test
if db.__class__.__name__ == "DB2":
> pytest.fail("deliberately failing for demo purposes")
E Failed: deliberately failing for demo purposes
test_backends.py:6: Failed
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize.TestClass instance at 0x1521440>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0xf93050>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
@ -55,14 +199,14 @@ the respective settings::
test_parametrize.py:17: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize.TestClass instance at 0x158aa70>, a = 3, b = 2
self = <test_parametrize.TestClass instance at 0xf93098>, a = 3, b = 2
def test_zerodivision(self, a, b):
> pytest.raises(ZeroDivisionError, "a/b")
E Failed: DID NOT RAISE
test_parametrize.py:20: Failed
2 failed, 2 passed in 0.03 seconds
3 failed, 3 passed in 0.04 seconds
Parametrizing test methods through a decorator
--------------------------------------------------------------
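
As before, the example module lies outside the changed lines; a hedged sketch
of a ``params`` decorator approach consistent with the tracebacks below (the
attribute name ``funcarglist`` is an assumption)::

    # content of ./test_parametrize2.py (sketch)
    import pytest

    def params(funcarglist):
        # attach the list of argument sets to the test function
        def wrapper(function):
            function.funcarglist = funcarglist
            return function
        return wrapper

    def pytest_generate_tests(metafunc):
        # schedule one call per attached argument set, if any
        for funcargs in getattr(metafunc.function, 'funcarglist', ()):
            metafunc.addcall(funcargs=funcargs)

    class TestClass:
        @params([dict(a=1, b=2), dict(a=3, b=3), ])
        def test_equals(self, a, b):
            assert a == b

        @params([dict(a=1, b=0), dict(a=3, b=2)])
        def test_zerodivision(self, a, b):
            pytest.raises(ZeroDivisionError, "a/b")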
@ -103,7 +247,7 @@ Running it gives similar results as before::
================================= FAILURES =================================
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize2.TestClass instance at 0x22a77e8>, a = 1, b = 2
self = <test_parametrize2.TestClass instance at 0x27e15a8>, a = 1, b = 2
@params([dict(a=1, b=2), dict(a=3, b=3), ])
def test_equals(self, a, b):
@ -113,7 +257,7 @@ Running it gives similar results as before::
test_parametrize2.py:19: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize2.TestClass instance at 0x2332a70>, a = 3, b = 2
self = <test_parametrize2.TestClass instance at 0x2953bd8>, a = 3, b = 2
@params([dict(a=1, b=0), dict(a=3, b=2)])
def test_zerodivision(self, a, b):
@ -142,4 +286,4 @@ Running it (with Python 2.4 through Python 2.7 installed)::
. $ py.test -q multipython.py
collecting ... collected 75 items
....s....s....s....ssssss....s....s....s....ssssss....s....s....s....ssssss
48 passed, 27 skipped in 2.09 seconds
48 passed, 27 skipped in 1.59 seconds
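
``multipython.py`` itself is not touched by this commit; as a rough,
hypothetical sketch, its parametrization over interpreter pairs could be set
up along these lines (the interpreter list and funcarg names are assumptions)::

    import py

    pythonlist = ['python2.4', 'python2.5', 'python2.6', 'python2.7']

    def pytest_generate_tests(metafunc):
        # one call per (dumping interpreter, loading interpreter, object) combination
        if 'python1' in metafunc.funcargnames:
            assert 'python2' in metafunc.funcargnames
            for obj in [42, {}, {1: 3}]:
                for py1 in pythonlist:
                    for py2 in pythonlist:
                        metafunc.addcall(id='%s-%s-%s' % (py1, py2, obj),
                                         param=(py1, py2, obj))

    def pytest_funcarg__python1(request):
        # skip the whole combination if the interpreter is not installed,
        # which would account for skipped tests in a run like the one above
        name = request.param[0]
        executable = py.path.local.sysfind(name)
        if executable is None:
            py.test.skip("%s not found" % name)
        return executable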


@ -84,64 +84,6 @@ rather pass in different or more complex objects. See the
next example or refer to :ref:`mysetup` for more information
on real-life examples.
generating parameter combinations, depending on the command line
----------------------------------------------------------------------------
.. regendoc:wipe
Let's say we want to execute a test with different parameters
and the parameter range shall be determined by a command
line argument. Let's first write a simple computation test::
# content of test_compute.py
def test_compute(param1):
assert param1 < 4
Now we add a test configuration like this::
# content of conftest.py
def pytest_addoption(parser):
parser.addoption("--all", action="store_true",
help="run all combinations")
def pytest_generate_tests(metafunc):
if 'param1' in metafunc.funcargnames:
if metafunc.config.option.all:
end = 5
else:
end = 2
for i in range(end):
metafunc.addcall(funcargs={'param1': i})
This means that we only run 2 tests if we do not pass ``--all``::
$ py.test -q test_compute.py
collecting ... collected 2 items
..
2 passed in 0.01 seconds
We run only two computations, so we see two dots.
let's run the full monty::
$ py.test -q --all
collecting ... collected 5 items
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
param1 = 4
def test_compute(param1):
> assert param1 < 4
E assert 4 < 4
test_compute.py:3: AssertionError
1 failed, 4 passed in 0.03 seconds
As expected when running the full range of ``param1`` values
we'll get an error on the last one.
dynamically adding command line options
--------------------------------------------------------------
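
The conftest.py referred to in the next hunk is outside the changed lines;
a minimal sketch, assuming the ``pytest_cmdline_preparse`` hook and an
installed pytest-xdist plugin providing ``-n``::

    # content of conftest.py (sketch)
    import sys

    def pytest_cmdline_preparse(args):
        if 'xdist' in sys.modules:  # pytest-xdist plugin is installed
            import multiprocessing
            num = max(multiprocessing.cpu_count() // 2, 1)
            args[:] = ["-n", str(num)] + args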
@ -167,15 +109,15 @@ directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
gw0 I / gw1 I / gw2 I / gw3 I
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
scheduling tests via LoadScheduling
============================= in 0.29 seconds =============================
============================= in 0.37 seconds =============================
.. _`retrieved by hooks as item keywords`:
.. _`excontrolskip`:
control skipping of tests according to command line option
--------------------------------------------------------------
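
The conftest.py and test module for this section are outside the changed
lines; a sketch consistent with the skip message shown below, assuming the
usual ``--runslow`` recipe::

    # content of conftest.py (sketch)
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")

    def pytest_runtest_setup(item):
        if 'slow' in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")

    # content of test_module.py (sketch)
    import pytest

    def test_func_fast():
        pass

    @pytest.mark.slow
    def test_func_slow():
        pass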
@ -214,12 +156,12 @@ and when running it will see a skipped "slow" test::
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
collecting ... collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-171/conftest.py:9: need --runslow option to run
SKIP [1] /tmp/doc-exec-275/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.02 seconds ====================
@ -227,7 +169,7 @@ Or run it including the ``slow`` marked test::
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
collecting ... collected 2 items
test_module.py ..
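
The next hunk only refreshes regenerated output for the test-header example;
the conftest.py producing the ``project deps`` line is not part of this diff,
but a minimal sketch could be::

    # content of conftest.py (sketch)
    def pytest_report_header(config):
        return "project deps: mylib-1.1"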
@ -319,7 +261,7 @@ which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
project deps: mylib-1.1
collecting ... collected 0 items
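
Likewise, the conftest.py for the verbose-only header info sits outside the
hunk; a minimal sketch, again assuming the ``pytest_report_header`` hook::

    # content of conftest.py (sketch)
    def pytest_report_header(config):
        if config.option.verbose > 0:
            return ["info1: did you know that ...", "did you?"]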
@ -342,7 +284,7 @@ which will add info only when run with "-v"::
$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1 -- /home/hpk/venv/0/bin/python
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0 -- /home/hpk/venv/0/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
@ -353,7 +295,7 @@ and nothing when run plainly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
platform linux2 -- Python 2.6.6 -- pytest-2.0.2.dev0
collecting ... collected 0 items
============================= in 0.00 seconds =============================


@ -28,6 +28,8 @@ very useful if you want to test e.g. against different database backends
or with multiple numerical argument sets and want to reuse the same set
of test functions.
.. _funcarg:
Basic funcarg example
-----------------------
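
The basic example body is unchanged by this commit and not visible in the
hunks below; for reference, a minimal sketch of a funcarg factory using the
``pytest_funcarg__`` prefix (file and argument names are made up)::

    # content of test_simplefactory.py (sketch)
    def pytest_funcarg__myfuncarg(request):
        return 42

    def test_function(myfuncarg):
        assert myfuncarg == 42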
@ -196,6 +198,8 @@ If you want to select only the run with the value ``7`` you could do::
======================== 9 tests deselected by '7' =========================
================== 1 passed, 9 deselected in 0.01 seconds ==================
You might want to look at :ref:`more parametrization examples <paramexamples>`.
.. _`metafunc object`:
The **metafunc** object