.. highlightlang:: python

Basic patterns and examples
==========================================================

Pass different values to a test function, depending on command line options
----------------------------------------------------------------------------

.. regendoc:wipe

Suppose we want to write a test that depends on a command line option.
Here is a basic pattern for how to achieve this::

    # content of test_sample.py
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
        elif cmdopt == "type2":
            print ("second")
        assert 0 # to see what was printed

For this to work we need to add a command line option and
provide the ``cmdopt`` through a :ref:`fixture function <fixture function>`::

    # content of conftest.py
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
            help="my option: type1 or type2")

    @pytest.fixture
    def cmdopt(request):
        return request.config.option.cmdopt

Let's run this without supplying our new option::

    $ py.test -q test_sample.py
    F
    ================================= FAILURES =================================
    _______________________________ test_answer ________________________________

    cmdopt = 'type1'

        def test_answer(cmdopt):
            if cmdopt == "type1":
                print ("first")
            elif cmdopt == "type2":
                print ("second")
    >       assert 0 # to see what was printed
    E       assert 0

    test_sample.py:6: AssertionError
    ----------------------------- Captured stdout ------------------------------
    first

And now with supplying a command line option::

    $ py.test -q --cmdopt=type2
    F
    ================================= FAILURES =================================
    _______________________________ test_answer ________________________________

    cmdopt = 'type2'

        def test_answer(cmdopt):
            if cmdopt == "type1":
                print ("first")
            elif cmdopt == "type2":
                print ("second")
    >       assert 0 # to see what was printed
    E       assert 0

    test_sample.py:6: AssertionError
    ----------------------------- Captured stdout ------------------------------
    second

You can see that the command line option arrived in our test.  This
completes the basic pattern.  However, you often want to process
command line options outside of the test and instead pass in different or
more complex objects.
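
As a minimal sketch of that idea (building on the ``--cmdopt`` option from the
``conftest.py`` above; the ``CmdConfig`` class and the ``cmdconfig`` fixture
name are only illustrative), a fixture can translate the raw option string
into a richer object before any test sees it::

    # content of conftest.py (sketch)
    import pytest

    class CmdConfig:
        # hypothetical helper object constructed from the command line option
        def __init__(self, kind):
            self.kind = kind
            self.label = "first" if kind == "type1" else "second"

    @pytest.fixture
    def cmdconfig(request):
        # hand tests a prepared object instead of the bare option string
        return CmdConfig(request.config.option.cmdopt)

A test then simply accepts ``cmdconfig`` as an argument and works with the
prepared object.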

Dynamically adding command line options
--------------------------------------------------------------

.. regendoc:wipe

Through :confval:`addopts` you can statically add command line
options for your project.  You can also dynamically modify
the command line arguments before they get processed::

    # content of conftest.py
    import sys
    def pytest_cmdline_preparse(args):
        if 'xdist' in sys.modules: # pytest-xdist plugin
            import multiprocessing
            num = max(multiprocessing.cpu_count() // 2, 1)
            args[:] = ["-n", str(num)] + args

If you have the :ref:`xdist plugin <xdist>` installed
you will now always perform test runs using a number
of subprocesses derived from your CPU core count. Running in an empty
directory with the above conftest.py::

    $ py.test
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1
    collected 0 items

    =============================  in 0.00 seconds =============================
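
The static :confval:`addopts` route mentioned above is plain ini-style
configuration rather than Python.  A minimal sketch, assuming a ``pytest.ini``
next to your tests (the particular flags are only an illustration)::

    # content of pytest.ini
    [pytest]
    addopts = -rsx --tb=short

Every py.test run in that project then behaves as if those options had been
typed on the command line.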

.. _`excontrolskip`:

Control skipping of tests according to command line option
--------------------------------------------------------------

.. regendoc:wipe

Here is a ``conftest.py`` file adding a ``--runslow`` command
line option to control skipping of ``slow`` marked tests::

    # content of conftest.py

    import pytest
    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")

    def pytest_runtest_setup(item):
        if 'slow' in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")

We can now write a test module like this::

    # content of test_module.py

    import pytest
    slow = pytest.mark.slow

    def test_func_fast():
        pass

    @slow
    def test_func_slow():
        pass

and when running it you will see a skipped "slow" test::

    $ py.test -rs    # "-rs" means report details on the little 's'
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1
    collected 2 items

    test_module.py .s
    ========================= short test summary info ==========================
    SKIP [1] /tmp/doc-exec-597/conftest.py:9: need --runslow option to run

    =================== 1 passed, 1 skipped in 0.01 seconds ====================

Or run it including the ``slow`` marked test::

    $ py.test --runslow
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1
    collected 2 items

    test_module.py ..

    ========================= 2 passed in 0.01 seconds =========================

Writing well-integrated assertion helpers
--------------------------------------------------

.. regendoc:wipe

If you have a test helper function called from a test you can
use ``pytest.fail`` to fail a test with a certain message.
The test support function will not show up in the traceback if you
set the ``__tracebackhide__`` option somewhere in the helper function.
Example::

    # content of test_checkconfig.py
    import pytest
    def checkconfig(x):
        __tracebackhide__ = True
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % (x,))

    def test_something():
        checkconfig(42)

The ``__tracebackhide__`` setting influences how py.test displays
tracebacks: the ``checkconfig`` function will not be shown
unless the ``--fulltrace`` command line option is specified.
Let's run our little function::

    $ py.test -q test_checkconfig.py
    F
    ================================= FAILURES =================================
    ______________________________ test_something ______________________________

        def test_something():
    >       checkconfig(42)
    E       Failed: not configured: 42

    test_checkconfig.py:8: Failed

Detect if running from within a py.test run
--------------------------------------------------------------

.. regendoc:wipe

Usually it is a bad idea to make application code
behave differently if called from a test.  But if you
absolutely must find out if your application code is
running from a test you can do something like this::

    # content of conftest.py

    import sys

    def pytest_configure(config):
        sys._called_from_test = True

    def pytest_unconfigure(config):
        del sys._called_from_test

and then check for the ``sys._called_from_test`` flag::

    if hasattr(sys, '_called_from_test'):
        # called from within a test run
        pass
    else:
        # called "normally"
        pass

accordingly in your application.  It's also a good idea
to use your own application module rather than ``sys``
for handling the flag.
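
For example, a minimal sketch that keeps the flag on a hypothetical ``mylib``
package of your own instead of on ``sys`` (the module and attribute names are
only illustrative)::

    # content of conftest.py (sketch)
    import mylib

    def pytest_configure(config):
        # set the flag on your own top-level module
        mylib._called_from_test = True

    def pytest_unconfigure(config):
        mylib._called_from_test = False

Application code can then check ``getattr(mylib, "_called_from_test", False)``
without touching ``sys``.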

Adding info to test report header
--------------------------------------------------------------

.. regendoc:wipe

It's easy to present extra information in a py.test run::

    # content of conftest.py

    def pytest_report_header(config):
        return "project deps: mylib-1.1"

which will add the string to the test header accordingly::

    $ py.test
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1
    project deps: mylib-1.1
    collected 0 items

    =============================  in 0.00 seconds =============================

.. regendoc:wipe

You can also return a list of strings which will be considered as several
lines of information.  You can of course also make the amount of reporting
information depend on e.g. the value of ``config.option.verbose`` so that
you present more information appropriately::

    # content of conftest.py

    def pytest_report_header(config):
        if config.option.verbose > 0:
            return ["info1: did you know that ...", "did you?"]

which will add info only when run with ``-v``::

    $ py.test -v
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1 -- /home/hpk/p/pytest/.tox/regen/bin/python
    info1: did you know that ...
    did you?
    collecting ... collected 0 items

    =============================  in 0.00 seconds =============================

and nothing when run plainly::

    $ py.test
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1
    collected 0 items

    =============================  in 0.00 seconds =============================

Profiling test durations
--------------------------

.. regendoc:wipe

.. versionadded:: 2.2

If you have a large, slow-running test suite you might want to find
out which tests are the slowest. Let's make an artificial test suite::

    # content of test_some_are_slow.py

    import time

    def test_funcfast():
        pass

    def test_funcslow1():
        time.sleep(0.1)

    def test_funcslow2():
        time.sleep(0.2)

Now we can profile which test functions execute the slowest::

    $ py.test --durations=3
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1
    collected 3 items

    test_some_are_slow.py ...

    ========================= slowest 3 test durations =========================
    0.20s call     test_some_are_slow.py::test_funcslow2
    0.10s call     test_some_are_slow.py::test_funcslow1
    0.00s setup    test_some_are_slow.py::test_funcfast
    ========================= 3 passed in 0.31 seconds =========================

Incremental testing - test steps
---------------------------------------------------

.. regendoc:wipe

Sometimes you may have a testing situation which consists of a series
of test steps.  If one step fails it makes no sense to execute further
steps as they are all expected to fail anyway and their tracebacks
add no insight.  Here is a simple ``conftest.py`` file which introduces
an ``incremental`` marker which is to be used on classes::

    # content of conftest.py

    import pytest

    def pytest_runtest_makereport(item, call):
        if "incremental" in item.keywords:
            if call.excinfo is not None:
                parent = item.parent
                parent._previousfailed = item

    def pytest_runtest_setup(item):
        if "incremental" in item.keywords:
            previousfailed = getattr(item.parent, "_previousfailed", None)
            if previousfailed is not None:
                pytest.xfail("previous test failed (%s)" % previousfailed.name)

These two hook implementations work together to abort incremental-marked
tests in a class.  Here is a test module example::

    # content of test_step.py

    import pytest

    @pytest.mark.incremental
    class TestUserHandling:
        def test_login(self):
            pass
        def test_modification(self):
            assert 0
        def test_deletion(self):
            pass

    def test_normal():
        pass

If we run this::

    $ py.test -rx
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.3.1
    collected 4 items

    test_step.py .Fx.

    ================================= FAILURES =================================
    ____________________ TestUserHandling.test_modification ____________________

    self = <test_step.TestUserHandling instance at 0x1aad680>

        def test_modification(self):
    >       assert 0
    E       assert 0

    test_step.py:9: AssertionError
    ========================= short test summary info ==========================
    XFAIL test_step.py::TestUserHandling::()::test_deletion
      reason: previous test failed (test_modification)
    ============== 1 failed, 2 passed, 1 xfailed in 0.01 seconds ===============

We'll see that ``test_deletion`` was not executed because ``test_modification``
failed.  It is reported as an "expected failure".