.. _paramexamples:

Parametrizing tests
=================================================

``pytest`` makes it easy to parametrize test functions.
For basic docs, see :ref:`parametrize-basics`.

In the following we provide some examples using
the builtin mechanisms.

Generating parameter combinations, depending on command line
----------------------------------------------------------------------------

.. regendoc:wipe

Let's say we want to execute a test with different computation
parameters, with the parameter range determined by a command
line argument.  Let's first write a simple (do-nothing) computation test:

.. code-block:: python

    # content of test_compute.py


    def test_compute(param1):
        assert param1 < 4

Now we add a test configuration like this:

.. code-block:: python

    # content of conftest.py


    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true", help="run all combinations")


    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            if metafunc.config.getoption("all"):
                end = 5
            else:
                end = 2
            metafunc.parametrize("param1", range(end))

This means that we only run 2 tests if we do not pass ``--all``:

.. code-block:: pytest

    $ pytest -q test_compute.py
    ..                                                                   [100%]
    2 passed in 0.12s

We run only two computations, so we see two dots.
Let's run the full monty:

.. code-block:: pytest

    $ pytest -q --all
    ....F                                                                [100%]
    ================================= FAILURES =================================
    _____________________________ test_compute[4] ______________________________

    param1 = 4

        def test_compute(param1):
    >       assert param1 < 4
    E       assert 4 < 4

    test_compute.py:4: AssertionError
    ========================= short test summary info ==========================
    FAILED test_compute.py::test_compute[4] - assert 4 < 4
    1 failed, 4 passed in 0.12s

As expected, when running the full range of ``param1`` values
we'll get an error on the last one.


Different options for test IDs
------------------------------------

pytest will build a string that is the test ID for each set of values in a
parametrized test. These IDs can be used with ``-k`` to select specific cases
to run, and they will also identify the specific case when one is failing.
Running pytest with ``--collect-only`` will show the generated IDs.

Numbers, strings, booleans and None will have their usual string representation
used in the test ID. For other objects, pytest will make a string based on
the argument name:

.. code-block:: python

    # content of test_time.py

    from datetime import datetime, timedelta

    import pytest

    testdata = [
        (datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1)),
        (datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1)),
    ]


    @pytest.mark.parametrize("a,b,expected", testdata)
    def test_timedistance_v0(a, b, expected):
        diff = a - b
        assert diff == expected


    @pytest.mark.parametrize("a,b,expected", testdata, ids=["forward", "backward"])
    def test_timedistance_v1(a, b, expected):
        diff = a - b
        assert diff == expected


    def idfn(val):
        if isinstance(val, (datetime,)):
            # note this wouldn't show any hours/minutes/seconds
            return val.strftime("%Y%m%d")


    @pytest.mark.parametrize("a,b,expected", testdata, ids=idfn)
    def test_timedistance_v2(a, b, expected):
        diff = a - b
        assert diff == expected


    @pytest.mark.parametrize(
        "a,b,expected",
        [
            pytest.param(
                datetime(2001, 12, 12), datetime(2001, 12, 11), timedelta(1), id="forward"
            ),
            pytest.param(
                datetime(2001, 12, 11), datetime(2001, 12, 12), timedelta(-1), id="backward"
            ),
        ],
    )
    def test_timedistance_v3(a, b, expected):
        diff = a - b
        assert diff == expected

In ``test_timedistance_v0``, we let pytest generate the test IDs.

In ``test_timedistance_v1``, we specified ``ids`` as a list of strings which were
used as the test IDs. These are succinct, but can be a pain to maintain.

In ``test_timedistance_v2``, we specified ``ids`` as a function that can generate a
string representation to make part of the test ID. So our ``datetime`` values use the
label generated by ``idfn``, but because we didn't generate a label for ``timedelta``
objects, they are still using the default pytest representation:

.. code-block:: pytest

    $ pytest test_time.py --collect-only
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
    rootdir: /home/sweet/project
    collected 8 items

    <Dir parametrize.rst-196>
      <Module test_time.py>
        <Function test_timedistance_v0[a0-b0-expected0]>
        <Function test_timedistance_v0[a1-b1-expected1]>
        <Function test_timedistance_v1[forward]>
        <Function test_timedistance_v1[backward]>
        <Function test_timedistance_v2[20011212-20011211-expected0]>
        <Function test_timedistance_v2[20011211-20011212-expected1]>
        <Function test_timedistance_v3[forward]>
        <Function test_timedistance_v3[backward]>

    ======================== 8 tests collected in 0.12s ========================

In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
together with the actual data, instead of listing them separately.
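
As mentioned above, these IDs can be used with ``-k`` to select specific cases,
because ``-k`` matches against the test names, which include the bracketed IDs.
A small sketch (an addition, not part of the original example):

.. code-block:: bash

    # run only the cases whose ID contains "backward"
    # (here: test_timedistance_v1[backward] and test_timedistance_v3[backward])
    pytest -v -k "backward" test_time.py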

A quick port of "testscenarios"
------------------------------------

Here is a quick port to run tests configured with :pypi:`testscenarios`,
an add-on from Robert Collins for the standard unittest framework. We
only have to work a bit to construct the correct arguments for pytest's
:py:func:`Metafunc.parametrize <pytest.Metafunc.parametrize>`:

.. code-block:: python

    # content of test_scenarios.py


    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")


    scenario1 = ("basic", {"attribute": "value"})
    scenario2 = ("advanced", {"attribute": "value2"})


    class TestSampleWithScenarios:
        scenarios = [scenario1, scenario2]

        def test_demo1(self, attribute):
            assert isinstance(attribute, str)

        def test_demo2(self, attribute):
            assert isinstance(attribute, str)

This is a fully self-contained example which you can run with:

.. code-block:: pytest

    $ pytest test_scenarios.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
    rootdir: /home/sweet/project
    collected 4 items

    test_scenarios.py ....                                               [100%]

    ============================ 4 passed in 0.12s =============================

If you just collect tests, you'll also nicely see 'advanced' and 'basic' as variants for the test function:

.. code-block:: pytest

    $ pytest --collect-only test_scenarios.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
    rootdir: /home/sweet/project
    collected 4 items

    <Dir parametrize.rst-196>
      <Module test_scenarios.py>
        <Class TestSampleWithScenarios>
          <Function test_demo1[basic]>
          <Function test_demo2[basic]>
          <Function test_demo1[advanced]>
          <Function test_demo2[advanced]>

    ======================== 4 tests collected in 0.12s ========================

Note that we told ``metafunc.parametrize()`` that the scenario values should be
considered class-scoped.  This leads to a resource-based ordering, so all tests
using the same scenario run together.

Deferring the setup of parametrized resources
---------------------------------------------------

.. regendoc:wipe

The parametrization of test functions happens at collection
time.  It is a good idea to set up expensive resources like DB
connections or subprocesses only when the actual test is run.
Here is a simple example of how you can achieve that. This test
requires a ``db`` object fixture:

.. code-block:: python

    # content of test_backends.py

    import pytest


    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
            pytest.fail("deliberately failing for demo purposes")

We can now add a test configuration that generates two invocations of
the ``test_db_initialized`` function and also implements a factory that
creates a database object for the actual test invocations:

.. code-block:: python

    # content of conftest.py
    import pytest


    def pytest_generate_tests(metafunc):
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)


    class DB1:
        "one database object"


    class DB2:
        "alternative database object"


    @pytest.fixture
    def db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        else:
            raise ValueError("invalid internal test config")

Let's first see how it looks at collection time:

.. code-block:: pytest

    $ pytest test_backends.py --collect-only
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
    rootdir: /home/sweet/project
    collected 2 items

    <Dir parametrize.rst-196>
      <Module test_backends.py>
        <Function test_db_initialized[d1]>
        <Function test_db_initialized[d2]>

    ======================== 2 tests collected in 0.12s ========================

And then when we run the test:

.. code-block:: pytest

    $ pytest -q test_backends.py
    .F                                                                   [100%]
    ================================= FAILURES =================================
    _________________________ test_db_initialized[d2] __________________________

    db = <conftest.DB2 object at 0xdeadbeef0001>

        def test_db_initialized(db):
            # a dummy test
            if db.__class__.__name__ == "DB2":
    >           pytest.fail("deliberately failing for demo purposes")
    E           Failed: deliberately failing for demo purposes

    test_backends.py:8: Failed
    ========================= short test summary info ==========================
    FAILED test_backends.py::test_db_initialized[d2] - Failed: deliberately f...
    1 failed, 1 passed in 0.12s

The first invocation with ``db == "DB1"`` passed while the second with
``db == "DB2"`` failed.  Our ``db`` fixture function instantiated each of the
DB objects during the setup phase, while ``pytest_generate_tests`` generated two
corresponding calls to ``test_db_initialized`` during the collection phase.

Indirect parametrization
---------------------------------------------------

Using the ``indirect=True`` parameter when parametrizing a test allows one to
parametrize a test with a fixture that receives the values before passing them
to the test:

.. code-block:: python

    import pytest


    @pytest.fixture
    def fixt(request):
        return request.param * 3


    @pytest.mark.parametrize("fixt", ["a", "b"], indirect=True)
    def test_indirect(fixt):
        assert len(fixt) == 3

This can be used, for example, to do more expensive setup at test run time in
the fixture, rather than having to run those setup steps at collection time.
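
As a minimal sketch of that idea (this example is an addition, not part of the
original docs; the "expensive" resource here is just a stand-in class):

.. code-block:: python

    # content of test_deferred_setup.py (illustrative)
    import time

    import pytest


    class ExpensiveResource:
        """Stand-in for something costly to create, e.g. a DB connection."""

        def __init__(self, name):
            time.sleep(0.01)  # pretend this takes a while
            self.name = name


    @pytest.fixture
    def resource(request):
        # only the small string parameter exists at collection time; the
        # expensive object is created here, when the parametrized test runs
        return ExpensiveResource(request.param)


    @pytest.mark.parametrize("resource", ["backend-a", "backend-b"], indirect=True)
    def test_resource_name(resource):
        assert resource.name.startswith("backend-")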

.. regendoc:wipe

Apply indirect on particular arguments
---------------------------------------------------

Very often parametrization uses more than one argument name. There is the
opportunity to apply the ``indirect`` parameter to particular arguments only.
It can be done by passing a list or tuple of argument names to ``indirect``.
In the example below there is a function ``test_indirect`` which uses
two fixtures: ``x`` and ``y``. Here we give ``indirect`` a list which contains
the name of the fixture ``x``. The ``indirect`` parameter will be applied to
this argument only, and the value ``a`` will be passed to the respective
fixture function:

.. code-block:: python

    # content of test_indirect_list.py

    import pytest


    @pytest.fixture(scope="function")
    def x(request):
        return request.param * 3


    @pytest.fixture(scope="function")
    def y(request):
        return request.param * 2


    @pytest.mark.parametrize("x, y", [("a", "b")], indirect=["x"])
    def test_indirect(x, y):
        assert x == "aaa"
        assert y == "b"

This test passes, because only ``x`` was routed through its fixture while ``y``
received the value ``b`` directly:

.. code-block:: pytest

    $ pytest -v test_indirect_list.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
    cachedir: .pytest_cache
    rootdir: /home/sweet/project
    collecting ... collected 1 item

    test_indirect_list.py::test_indirect[a-b] PASSED                     [100%]

    ============================ 1 passed in 0.12s =============================

.. regendoc:wipe

Parametrizing test methods through per-class configuration
--------------------------------------------------------------

.. _`unittest parametrizer`: https://github.com/testing-cabal/unittest-ext/blob/master/params.py


Here is an example ``pytest_generate_tests`` function implementing a
parametrization scheme similar to Michael Foord's `unittest
parametrizer`_ but in a lot less code:

.. code-block:: python

    # content of ./test_parametrize.py
    import pytest


    def pytest_generate_tests(metafunc):
        # called once per each test function
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = sorted(funcarglist[0])
        metafunc.parametrize(
            argnames, [[funcargs[name] for name in argnames] for funcargs in funcarglist]
        )


    class TestClass:
        # a map specifying multiple argument sets for a test method
        params = {
            "test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
            "test_zerodivision": [dict(a=1, b=0)],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            with pytest.raises(ZeroDivisionError):
                a / b

Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function.  Let's run it:

.. code-block:: pytest

    $ pytest -q
    F..                                                                  [100%]
    ================================= FAILURES =================================
    ________________________ TestClass.test_equals[1-2] ________________________

    self = <test_parametrize.TestClass object at 0xdeadbeef0002>, a = 1, b = 2

        def test_equals(self, a, b):
    >       assert a == b
    E       assert 1 == 2

    test_parametrize.py:21: AssertionError
    ========================= short test summary info ==========================
    FAILED test_parametrize.py::TestClass::test_equals[1-2] - assert 1 == 2
    1 failed, 2 passed in 0.12s

Parametrization with multiple fixtures
--------------------------------------

Here is a stripped down real-life example of using parametrized
testing for testing serialization of objects between different Python
interpreters.  We define a ``test_basic_objects`` function which
is to be run with different sets of arguments for its three arguments:

* ``python1``: first Python interpreter, run to pickle-dump an object to a file
* ``python2``: second interpreter, run to pickle-load an object from a file
* ``obj``: object to be dumped/loaded

.. literalinclude:: multipython.py

Running it results in some skips if we don't have all the Python interpreters installed and otherwise runs all combinations (3 interpreters times 3 interpreters times 3 objects to serialize/deserialize):

.. code-block:: pytest

   . $ pytest -rs -q multipython.py
   ssssssssssss...ssssssssssss                                          [100%]
   ========================= short test summary info ==========================
   SKIPPED [12] multipython.py:65: 'python3.9' not found
   SKIPPED [12] multipython.py:65: 'python3.11' not found
   3 passed, 24 skipped in 0.12s

Parametrization of optional implementations/imports
---------------------------------------------------

If you want to compare the outcomes of several implementations of a given
API, you can write test functions that receive the already imported implementations
and get skipped in case the implementation is not importable/available.  Let's
say we have a "base" implementation and the other (possibly optimized) ones
need to provide similar results:

.. code-block:: python

    # content of conftest.py

    import pytest


    @pytest.fixture(scope="session")
    def basemod(request):
        return pytest.importorskip("base")


    @pytest.fixture(scope="session", params=["opt1", "opt2"])
    def optmod(request):
        return pytest.importorskip(request.param)

And then a base implementation of a simple function:

.. code-block:: python

    # content of base.py
    def func1():
        return 1

And an optimized version:

.. code-block:: python

    # content of opt1.py
    def func1():
        return 1.0001

And finally a little test module:

.. code-block:: python

    # content of test_module.py


    def test_func1(basemod, optmod):
        assert round(basemod.func1(), 3) == round(optmod.func1(), 3)


If you run this with reporting for skips enabled:

.. code-block:: pytest

    $ pytest -rs test_module.py
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y
    rootdir: /home/sweet/project
    collected 2 items

    test_module.py .s                                                    [100%]

    ========================= short test summary info ==========================
    SKIPPED [1] test_module.py:3: could not import 'opt2': No module named 'opt2'
    ======================= 1 passed, 1 skipped in 0.12s =======================

You'll see that we don't have an ``opt2`` module and thus the second test run
of our ``test_func1`` was skipped.  A few notes:

- the fixture functions in the ``conftest.py`` file are "session-scoped" because we
  don't need to import more than once

- if you have multiple test functions and a skipped import, you will see
  the ``[1]`` count increasing in the report

- you can put :ref:`@pytest.mark.parametrize <@pytest.mark.parametrize>` style
  parametrization on the test functions to parametrize input/output
  values as well, as sketched below.
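
As a minimal sketch of that last point (an addition to the original example, reusing
the session-scoped ``basemod``/``optmod`` fixtures and the ``func1`` shown above; the
``precision`` values are purely illustrative):

.. code-block:: python

    # content of test_module.py (additional sketch)
    import pytest


    @pytest.mark.parametrize("precision", [1, 2, 3])
    def test_func1_rounded(basemod, optmod, precision):
        # each (implementation, precision) combination becomes its own test;
        # unavailable implementations are still skipped via importorskip
        assert round(basemod.func1(), precision) == round(optmod.func1(), precision)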


Set marks or test ID for individual parametrized test
--------------------------------------------------------------------

Use ``pytest.param`` to apply marks or set the test ID for an individual parametrized test.
For example:

.. code-block:: python

    # content of test_pytest_param_example.py
    import pytest


    @pytest.mark.parametrize(
        "test_input,expected",
        [
            ("3+5", 8),
            pytest.param("1+7", 8, marks=pytest.mark.basic),
            pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
            pytest.param(
                "6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail], id="basic_6*9"
            ),
        ],
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected

In this example, we have 4 parametrized tests. Except for the first test,
we mark the remaining three parametrized tests with the custom marker ``basic``,
and for the fourth test we also use the built-in mark ``xfail`` to indicate this
test is expected to fail. For explicitness, we set test IDs for some tests.

Then run ``pytest`` with verbose mode and with only the ``basic`` marker:

.. code-block:: pytest

    $ pytest -v -m basic
    =========================== test session starts ============================
    platform linux -- Python 3.x.y, pytest-8.x.y, pluggy-1.x.y -- $PYTHON_PREFIX/bin/python
    cachedir: .pytest_cache
    rootdir: /home/sweet/project
    collecting ... collected 24 items / 21 deselected / 3 selected

    test_pytest_param_example.py::test_eval[1+7-8] PASSED                [ 33%]
    test_pytest_param_example.py::test_eval[basic_2+4] PASSED            [ 66%]
    test_pytest_param_example.py::test_eval[basic_6*9] XFAIL             [100%]

    =============== 2 passed, 21 deselected, 1 xfailed in 0.12s ================

As a result:

- Four tests were collected.
- One test was deselected because it doesn't have the ``basic`` mark.
- Three tests with the ``basic`` mark were selected.
- The test ``test_eval[1+7-8]`` passed, but the name is autogenerated and confusing.
- The test ``test_eval[basic_2+4]`` passed.
- The test ``test_eval[basic_6*9]`` was expected to fail and did fail.

.. _`parametrizing_conditional_raising`:

Parametrizing conditional raising
--------------------------------------------------------------------

Use :func:`pytest.raises` with the
:ref:`pytest.mark.parametrize ref` decorator to write parametrized tests
in which some tests raise exceptions and others do not.

``contextlib.nullcontext`` can be used to test cases that are not expected to
raise exceptions but that should result in some value. The value is given as the
``enter_result`` parameter, which will be available as the ``with`` statement’s
target (``e`` in the example below).

For example:

.. code-block:: python

    from contextlib import nullcontext

    import pytest


    @pytest.mark.parametrize(
        "example_input,expectation",
        [
            (3, nullcontext(2)),
            (2, nullcontext(3)),
            (1, nullcontext(6)),
            (0, pytest.raises(ZeroDivisionError)),
        ],
    )
    def test_division(example_input, expectation):
        """Test how much I know division."""
        with expectation as e:
            assert (6 / example_input) == e

In the example above, the first three test cases should run without any
exceptions, while the fourth should raise a ``ZeroDivisionError`` exception,
which is expected by pytest.