Do not cause a SyntaxError for something like:
> DeprecationWarning: invalid escape sequence \w
This was happening via pdb++ when it imported pygments (and that had no
compiled .pyc file).
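A minimal reproduction sketch (not the pytest code itself): compiling source
that contains an invalid escape sequence emits the DeprecationWarning, and
when warning filters escalate it to an error, `compile()` surfaces it as a
SyntaxError:

    import warnings

    source = 'regex = "\\w"'  # the string literal contains the invalid \w

    with warnings.catch_warnings():
        warnings.simplefilter("error")  # escalate DeprecationWarning
        try:
            compile(source, "<demo>", "exec")
        except SyntaxError as exc:
            print("compile() raised:", exc)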
- use py27-pexpect, py27-trial, py27-numpy and py37-xdist in baseline;
  using pexpect there catches errors with pdb tests early, and
  py37-xdist is much faster than py37.
- move py34 and py36 out of baseline.
To keep existing tests which emit RemovedInPytest4Warnings running, we decided
to go with a command line option because:
* It is harder to integrate an ini option with tests which already use an ini file
* It also marks tests which need to be removed/updated in 4.1, when
RemovedInPytest4Warning and related functionality are removed.
Fix #3737
Ref: https://github.com/pytest-dev/pytest/issues/4321#issuecomment-436951894
Hardens the few tests affected by this:
1. `testing/test_session.py::test_rootdir_option_arg` displayed:
> root/test_rootdir_option_arg2/test_rootdir_option_arg.py
2. `test_cmdline_python_namespace_package` displayed "hello/" prefix for:
> hello/test_hello.py::test_hello
> hello/test_hello.py::test_other
`_collect_report_last_write` might be None when `pytest_collection` was
not called before. Not sure if this indicates another problem, but it
can be reproduced with `testing/test_collection.py::TestCollector::()::test_getcustomfile_roundtrip`.
Fixes https://github.com/pytest-dev/pytest/issues/4329
This is relevant when using runpytest in-process.
Fixes:
E def test_1(testdir):
E testdir.runpytest()
E > __import__('pdb').set_trace()
E
E ../../test_trace_after_runpytest.py:3:
E …/Vcs/pytest/src/_pytest/debugging.py:81: in set_trace
E tw = _pytest.config.create_terminal_writer(cls._config)
E
E config = None, args = (), kwargs = {}, tw = <py._io.terminalwriter.TerminalWriter object at 0x7f1097088160>
E
E def create_terminal_writer(config, *args, **kwargs):
E """Create a TerminalWriter instance configured according to the options
E in the config object. Every code which requires a TerminalWriter object
E and has access to a config object should use this function.
E """
E tw = py.io.TerminalWriter(*args, **kwargs)
E > if config.option.color == "yes":
E E AttributeError: 'NoneType' object has no attribute 'option'
This removes the hack added in https://github.com/pytest-dev/pytest/pull/3802.
Adjusts test:
- it appears to not have been changed to 7 intentionally.
- removes XXX comment, likely not relevant anymore since 6dac7743.
Also renames `_path2confmods` to `_dirpath2confmods` for clarity (it is
expected to be a dirpath in `_importconftest`).
Uses an explicit maxsize, since it appears to be only relevant for a
short period [1].
Removes the lru_cache on _getconftest_pathlist, which makes no
difference when caching _getconftestmodules, at least with the
performance test of 100x10 files (#4237).
1: https://github.com/pytest-dev/pytest/pull/4237#discussion_r228528007
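The shape of the change is roughly this sketch (the maxsize value and the
placement on the conftest lookup are assumptions, not the exact patch):

    from functools import lru_cache

    @lru_cache(maxsize=128)  # explicit maxsize instead of the unbounded default
    def _getconftestmodules(path):
        ...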
The type `bytes` is available on all supported Python versions. On
Python 2.7, it is an alias of str, same as six.binary_type.
Makes the code slightly more forward compatible.
After `pdb.set_trace()`, capturing is turned off.
This patch resumes it after using the `continue` (or `c` / `cont`)
command.
Store _pytest_capman on the class, for pdbpp's do_debug hack to keep it.
Without this, `debug …` would fail like this:
/usr/lib/python3.6/cmd.py:217: in onecmd
return func(arg)
.venv/lib/python3.6/site-packages/pdb.py:608: in do_debug
return orig_do_debug(self, arg)
/usr/lib/python3.6/pdb.py:1099: in do_debug
sys.call_tracing(p.run, (arg, globals, locals))
/usr/lib/python3.6/bdb.py:434: in run
exec(cmd, globals, locals)
/usr/lib/python3.6/bdb.py:51: in trace_dispatch
return self.dispatch_line(frame)
/usr/lib/python3.6/bdb.py:69: in dispatch_line
self.user_line(frame)
/usr/lib/python3.6/pdb.py:261: in user_line
self.interaction(frame, None)
.venv/lib/python3.6/site-packages/pdb.py:203: in interaction
self.setup(frame, traceback)
E AttributeError: 'PytestPdb' object has no attribute '_pytest_capman'
- add pytest_leave_pdb hook
- fixes test_pdb_interaction_capturing_twice: would fail on master now,
but works here
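A rough sketch of the resume logic (hypothetical shape, not the exact patch):

    import pdb

    class PytestPdb(pdb.Pdb):
        _pytest_capman = None  # on the class, so pdb++'s do_debug keeps it

        def do_continue(self, arg):
            ret = pdb.Pdb.do_continue(self, arg)
            if self._pytest_capman is not None:
                # resume the capturing that set_trace() had suspended
                self._pytest_capman.resume_global_capture()
            return ret

        do_c = do_cont = do_continue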
For pytest's own suite the `cache_info()` looks as follows:
> session.config._getconftest_pathlist.cache_info()
CacheInfo(hits=231, misses=19, maxsize=None, currsize=19)
While it does not really make a difference for me this might help with
larger test suites / the case mentioned in
https://github.com/pytest-dev/pytest/issues/2206#issuecomment-432623646.
This has been failing as of 2018-10-23 while installing gcc with
this message:
==> Installing numpy dependency: gcc
==> Downloading https://homebrew.bintray.com/bottles/gcc-8.2.0.high_sierra.bottl
######################################################################## 100.0%
==> Pouring gcc-8.2.0.high_sierra.bottle.1.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink include/c++
Target /usr/local/include/c++
already exists. You may want to remove it:
rm '/usr/local/include/c++'
To force the link and overwrite all conflicting files:
brew link --overwrite gcc
To list all files that would be deleted:
brew link --overwrite --dry-run gcc
Possible conflicting files are:
/usr/local/include/c++ -> /usr/local/Caskroom/oclint/0.13.1,17.4.0/oclint-0.13.1/include/c++
Running `pytest -k doesnotmatch` on pytest's own tests takes ~3s with
Kitty terminal for me, but only ~1s with `-q`.
It also is faster with urxvt, but still takes 2.2s there.
This patch only calls `report_collect` every 0.1s, which is good enough
for reporting collection progress, and improves the time with both Kitty
and urxvt to ~1.2s for me.
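A minimal sketch of the throttling (names follow the attributes mentioned
above; the written output is elided):

    import time

    REPORT_COLLECT_RESOLUTION = 0.1  # seconds

    def report_collect(self, final=False):
        now = time.time()
        # Skip intermediate updates that arrive faster than the resolution;
        # the final report is always written.
        if (not final and self._collect_report_last_write is not None
                and now - self._collect_report_last_write < REPORT_COLLECT_RESOLUTION):
            return
        self._collect_report_last_write = now
        # ... write the "collecting ..." status line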
Some points in the document work differently in 3.9, and the order of the
sections was changed a bit to make more sense for users reading it for the first time.
Using string formatting with the raw escaped object led to its string representation:
"<py._xmlgen.raw object>"
Format the unescaped string first, then use the XML escape method as a last step.
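Sketch of the ordering fix (`bin_xml_escape` is assumed here as the escape
helper; `reason` is an illustrative value):

    # Before: escaping first wrapped the value in a raw object, so "%s"
    # rendered "<py._xmlgen.raw object>" instead of the text.
    # After: interpolate the plain string first, escape as the last step.
    text = "%s" % (reason,)
    escaped = bin_xml_escape(text)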
`help quit` in pdb says:
> Quit from the debugger. The program being executed is aborted.
But pytest would continue with the next tests, often making it necessary
to kill the pytest process when using `--pdb` and trying to cancel the
tests using `KeyboardInterrupt` / `Ctrl-C`.
This fixes running `pytest tests/test_foo.py::test_bar`, where `tests`
is a symlink to `project/app/tests`: previously
`project/app/conftest.py` would be ignored for fixtures then.
This gets printed by the terminal reporter already, and currently
results in the same error being displayed twice, e.g. when raising an
`Exception` manually from `pytest.debugging.pytest_exception_interact`.
`expect()` expects a regular expression, so "Pdb" is equivalent to
"(Pdb)".
But instead of escaping the parentheses this patch removes them, to
allow for matching "(Pdb++)", too.
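For example, in a pexpect-based test (`child` assumed to be a spawned pytest
process):

    child.expect("Pdb")  # matches "(Pdb)" as well as "(Pdb++)"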
Our policy is to not deprecate features during bugfix releases, but in this
case I believe it makes sense as we are only documenting it as deprecated,
without issuing warnings which might potentially break test suites.
This will get the word out that hook implementers should not use this parameter
at all.
Fix #4036
This should increase coverage for subprocesses, where previously
`source` paths were used only from the config file, but not the initial
`--source` argument.
Unfortunately we need to get a `py.path.local` object to perform the fnmatch
operation; it is different from the standard `fnmatch` module because it
implements its own custom logic, so we need to use `py.path` to perform
the fnmatch for backward compatibility reasons.
Ideally we should be able to use a "pure path" in `pathlib` terms (a path
not bound to the file system), but we don't have those in pylib.
Fix #3973
Optionally raise an exception when parametrize collects no arguments.
Provide the name of the test causing the failure in the exception
message.
See: #3849
* test_lf_and_ff_obey_verbosity is no longer necessary because
test_lf_and_ff_prints_no_needless_message already checks if the proper messages
are displayed when -q is used.
* Improve test_lf_and_ff_prints_no_needless_message so we also check that
the correct message is displayed when there are failures to run
Given that our guidelines demand that the CI have already passed, it seems
wasteful to run all those jobs again for the exact same commit.
As discussed in https://github.com/pytest-dev/pytest/pull/3906#issuecomment-417094481,
this will skip the "test" stage when building a tag for deployment.
doctesting: remove changedir
With coverage 5 we could use COVERAGE_RCFILE to make it find the
.coveragerc, or we could add `--rcfile` to _PYTEST_TOX_COVERAGE_RUN, but
I've thought that this should not be the job that has to test if
`--pyargs` actually works.
- Skips pypy for coverage, reports only py37 to coveralls
- tox: allow for TOXENV=py37-coverage
- tracks coverage in subprocesses, using coverage-enable-subprocess, and
parallel=1
- removes usedevelop with doctesting to match `--source` being used with
coverage
- keep coveralls for now, used with AppVeyor
What happens is that atomic_write on Python 2.7 on Windows will try
to convert the paths to unicode, but this triggers the import of
the encoding module for the file system codec, which in turn triggers
the rewrite, which in turn again tries to import the module, and so on.
This short-circuits the cases where we try to import another file when
writing a pyc file; I don't expect this to affect anything because
the only modules that could be affected are those imported by
atomic_writes.
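The guard is conceptually similar to this sketch (hypothetical names):

    _writing_pyc = False

    def write_pyc_guarded(write_pyc, *args):
        global _writing_pyc
        if _writing_pyc:
            return False  # an import triggered by the write itself: skip it
        _writing_pyc = True
        try:
            return write_pyc(*args)
        finally:
            _writing_pyc = False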
Fix #3506
I just noticed that the newly committed code block doesn't format as a code block without `::` in the paragraph before. Perhaps it doesn't make a difference, but I also corrected 5 spaces to 4, which seems to be the standard.
It seems pytest's very comprehensive CI sniffed out a few other places with similar bugs. Ideally we should find all the places where args are not stringy and solve it at the source, but who knows how many people are relying on the implicit string conversion. See [here](https://github.com/pytest-dev/pytest/blob/master/src/_pytest/config/__init__.py#L160-L166) for one such problem area (args with a single py.path.local instance is converted here, but a list or tuple containing some are not).
The problem was that _matchnodes would receive two items: [DoctestModule, Module]. It would then collect the first one, *cache it*, and fail to match against the name in the command line. Next, it would reuse the cached item (DoctestModule) instead of collecting the Module which would eventually find the "test" name on it.
Added the type of the node to the cache key to avoid this problem, although I'm not a big fan of caches that have different key types.
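Conceptually the fix looks like this sketch (hypothetical names):

    node_cache = {}

    def collect_cached(node):
        # Include the node's type in the key, so a DoctestModule cached for
        # a path no longer shadows the plain Module for the same path.
        key = (node.fspath, type(node))
        if key not in node_cache:
            node_cache[key] = node.collect()
        return node_cache[key]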
Fix #3843
Add a check before printing teardown logs.
The 'print_teardown_sections' method does not check the '--show-capture'
option value, so teardown logs are always printed.
Resolves: #3816
If `PYTEST_DISABLE_PLUGIN_AUTOLOAD` is set, disable auto-loading of
plugins through setuptools entrypoints. Only plugins that have been
explicitly specified are loaded.
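For example (`myplugin` is a placeholder for an explicitly requested plugin):

    PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 pytest -p myplugin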
Ref: #3784.
This refactors the code so we have the real function object right during
collection. This avoids having to unwrap it later and lose attached information
such as "async" functions.
Fix #3747
This will now issue a RemovedInPytest4Warning when the user calls
a fixture function directly, instead of requesting it from test
functions, as is expected.
Fix #3661
- add the missing --last-failed argument in the presented examples, as they are misleading and suggest that the argument is not needed.
Example:
Given this hierarchy: p1.s1.s2.s3
I want to run pytest p1/s1/s2/s3/foo.py
If there are any package fixtures defined at p1..s2 levels, they should also be executed.
@@ -3,7 +3,7 @@ Thanks for submitting a PR, your contribution is really appreciated!
Here's a quick checklist that should be present in PRs (you can delete this text from the final description, this is
just a guideline):
- [ ] Create a new changelog file in the `changelog` folder, with a name like `<ISSUE NUMBER>.<TYPE>.rst`. See [changelog/README.rst](https://github.com/pytest-dev/pytest/blob/master/changelog/README.rst) for details.
- [ ] Target the `master` branch for bug fixes, documentation updates and trivial changes.
- [ ] Target the `features` branch for new features and removals/deprecations.
- [ ] Include documentation when adding new features.
#. Follow **PEP-8** for naming and `black <https://github.com/ambv/black>`_ for formatting.
#. Tests are run using ``tox``::
tox -e linting,py27,py37
The test environments above are usually enough to cover most cases locally.
@@ -237,12 +237,12 @@ Here is a simple overview, with pytest-specific bits:
#. Run all the tests
You need to have Python 2.7 and 3.7 available in your system. Now
running tests is as simple as issuing this command::
$ tox -e linting,py27,py37
This command will run tests via the "tox" tool against Python 2.7 and 3.7
and also perform "lint" coding-style checks.
#. You can now edit your local working copy and run the tests again as necessary. Please follow PEP-8 for naming.
@@ -252,9 +252,9 @@ Here is a simple overview, with pytest-specific bits:
$ tox -e py27 -- --pdb
Or to only run tests in a particular test module on Python 3.7::
$ tox -e py37 -- testing/test_config.py
When committing, ``pre-commit`` will re-format the files if necessary.
@@ -280,6 +280,47 @@ Here is a simple overview, with pytest-specific bits:
base: features # if it's a feature
Writing Tests
----------------------------
Writing tests for plugins or for pytest itself is often done using the `testdir fixture <https://docs.pytest.org/en/latest/reference.html#testdir>`_, as a "black-box" test.
For example, to ensure a simple test passes you can write:
.. code-block:: python

    def test_true_assertion(testdir):
        testdir.makepyfile(
            """
            def test_foo():
                assert True
            """
        )
        result = testdir.runpytest()
        result.assert_outcomes(failed=0, passed=1)
Alternatively, it is possible to make checks based on the actual output of the terminal, e.g. the final summary line:
========================== 1 failed in 0.04 seconds ===========================
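A sketch of such a check, using ``fnmatch_lines`` (glob-like matching against
the captured terminal output):

.. code-block:: python

    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*1 failed in*"])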
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <https://docs.pytest.org/en/latest/getting-started.html#our-first-test-run>`_ for more examples.
Features
--------
- Detailed info on failing `assert statements <https://docs.pytest.org/en/latest/assert.html>`_ (no need to remember ``self.assert*`` names);
{% for text, values in sections[section][category]|dictsort(by='value') %}
{% set issue_joiner = joiner(', ') %}
- {% for value in values|sort %}{{ issue_joiner() }}`{{ value }} <https://github.com/pytest-dev/pytest/issues/{{ value[1:] }}>`_{% endfor %}: {{ text }}
@@ -7,14 +7,16 @@ Keeping backwards compatibility has a very high priority in the pytest project.
With the pytest 3.0 release we introduced a clear communication scheme for when we will actually remove the old busted joint and politely ask you to use the new hotness instead, while giving you enough time to adjust your tests or raise concerns if there are valid reasons to keep deprecated functionality around.
To communicate changes we issue deprecation warnings using a custom warning hierarchy (see :ref:`internal-warnings`). These warnings may be suppressed using the standard means: ``-W`` command-line flag or ``filterwarnings`` ini options (see :ref:`warnings`), but we suggest to use these sparingly and temporarily, and heed the warnings when possible.
We will only start the removal of deprecated functionality in major releases (e.g. if we deprecate something in 3.0 we will start to remove it in 4.0), and keep it around for at least two minor releases (e.g. if we deprecate something in 3.9 and 4.0 is the next release, we start to remove it in 5.0, not in 4.0).
When the deprecation expires (e.g. 4.0 is released), we won't remove the deprecated functionality immediately, but will use the standard warning filters to turn them into **errors** by default. This approach makes it explicit that removal is imminent, and still gives you time to turn the deprecated feature into a warning instead of an error so it can be dealt with in your own time. In the next minor release (e.g. 4.1), the feature will be effectively removed.
Deprecation Roadmap
-------------------
Features currently deprecated and removed in previous releases can be found in :ref:`deprecations`.
Following our deprecation policy, after we start issuing deprecation warnings we keep features for *at least* two minor versions before considering removal.
We track future deprecation and removal of features using milestones and the `deprecation <https://github.com/pytest-dev/pytest/issues?q=label%3A%22type%3A+deprecation%22>`_ and `removal <https://github.com/pytest-dev/pytest/labels/type%3A%20removal>`_ labels on GitHub.
@@ -12,7 +12,9 @@ For information on plugin hooks and objects, see :ref:`plugins`.
For information on the ``pytest.mark`` mechanism, see :ref:`mark`.
For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures (add ``-v`` to also see fixtures with leading ``_``), type:

.. code-block:: pytest
$ pytest -q --fixtures
cache
@@ -75,7 +77,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
caplog
Access and control log capturing.
Captured logs are available through the following properties/methods::
@@ -244,6 +256,8 @@ You can always peek at the content of the cache using the
{'test_caching.py::test_function': True}
cache/nodeids contains:
['test_caching.py::test_function']
cache/stepwise contains:
[]
example/value contains:
42
@@ -260,3 +274,9 @@ by adding the ``--cache-clear`` option like this::
This is recommended for invocations from Continuous Integration
servers where isolation and correctness is more important
than speed.
Stepwise
--------
As an alternative to ``--lf -x``, especially for cases where you expect that a large part of the test suite will fail, ``--sw``/``--stepwise`` allows you to fix the failures one at a time. The test suite will run until the first failure and then stop. At the next invocation, tests will continue from the last failing test and then run until the next failing test. You may use the ``--stepwise-skip`` option to ignore one failing test and stop the test execution on the second failing test instead. This is useful if you get stuck on a failing test and just want to ignore it until later.
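For example:

.. code-block:: pytest

    $ pytest --sw                  # stop at the first failing test
    $ pytest --sw                  # continue from the last failing test
    $ pytest --sw --stepwise-skip  # skip one failure, stop at the next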
* ``node.warn(PytestWarning("some message"))``: this is now the **recommended** way to call this function.
  The warning instance must be a PytestWarning or subclass.
* ``node.warn("CI", "some message")``: this code/message form is now **deprecated** and should be converted to the warning instance form above.
``pytest_namespace``
~~~~~~~~~~~~~~~~~~~~
.. deprecated:: 3.7
This hook is deprecated because it greatly complicates the pytest internals regarding configuration and initialization, making some
bug fixes and refactorings impossible.
Example of usage:
.. code-block:: python

    class MySymbol:
        ...


    def pytest_namespace():
        return {"my_symbol": MySymbol()}
Plugin authors relying on this hook should instead require that users now import the plugin modules directly (with an appropriate public API).
As a stopgap measure, plugin authors may still inject their names into pytest's namespace, usually during ``pytest_configure``:
.. code-block:: python

    import pytest


    def pytest_configure():
        pytest.my_symbol = MySymbol()
Calling fixtures directly
~~~~~~~~~~~~~~~~~~~~~~~~~
.. deprecated:: 3.7
Calling a fixture function directly, as opposed to requesting it in a test function, is deprecated.
For example:
.. code-block:: python

    @pytest.fixture
    def cell():
        return ...


    @pytest.fixture
    def full_cell():
        cell = cell()
        cell.make_full()
        return cell
This is a great source of confusion for new users, who will often call the fixture functions and request them from test functions interchangeably, which breaks the fixture resolution model.
In those cases just request the function directly in the dependent fixture:
.. code-block:: python

    @pytest.fixture
    def cell():
        return ...


    @pytest.fixture
    def full_cell(cell):
        cell.make_full()
        return cell
``Node.get_marker``
~~~~~~~~~~~~~~~~~~~
.. deprecated:: 3.6
As part of a large :ref:`marker-revamp`, :meth:`_pytest.nodes.Node.get_marker` is deprecated. See
:ref:`the documentation <update marker code>` for tips on how to update your code.
record_xml_property
~~~~~~~~~~~~~~~~~~~
.. deprecated:: 3.5
The ``record_xml_property`` fixture is now deprecated in favor of the more generic ``record_property``, which
can be used by other consumers (for example ``pytest-html``) to obtain custom information about the test run.
This is just a matter of renaming the fixture as the API is the same:
.. code-block:: python

    def test_foo(record_xml_property):
        ...
Change to:
.. code-block:: python

    def test_foo(record_property):
        ...
pytest_plugins in non-top-level conftest files
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. deprecated:: 3.5
Defining ``pytest_plugins`` is now deprecated in non-top-level conftest.py
files because they will activate referenced plugins *globally*, which is surprising because for all other pytest
features ``conftest.py`` files are only *active* for tests at or below it.
Metafunc.addcall
~~~~~~~~~~~~~~~~
.. deprecated:: 3.3
:meth:`_pytest.python.Metafunc.addcall` was a precursor to the current parametrized mechanism. Users should use
========================= 2 passed in 0.12 seconds =========================
.. _node-id:
@@ -124,11 +134,13 @@ Using ``-k expr`` to select tests based on their name
You can use the ``-k`` command line option to specify an expression
which implements a substring match on the test names instead of the
exact match on markers that ``-m`` provides. This makes it easy to
select tests based on their names:

.. code-block:: pytest
$ pytest -v -k http # running with the above defined example module
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected
@@ -137,11 +149,13 @@ select tests based on their names::
================== 1 passed, 3 deselected in 0.12 seconds ==================
And you can also run all tests except the ones that match the keyword:

.. code-block:: pytest
$ pytest -k "not send_http" -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected
@@ -152,11 +166,13 @@ And you can also run all tests except the ones that match the keyword::
================== 3 passed, 1 deselected in 0.12 seconds ==================
Or to select "http" and "quick" tests:

.. code-block:: pytest
$ pytest -k "http or quick" -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.6
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 2 deselected
@@ -200,15 +216,17 @@ You can ask which markers exist for your test suite - the list includes our just
$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@@ -269,8 +287,12 @@ You can also set a module level marker::
import pytest
pytestmark = pytest.mark.webtest
in which case markers will be applied (in left-to-right order) to
all functions and methods defined in the module.
.. _`marking individual tests when using parametrize`:
@@ -345,11 +367,13 @@ A test file using this local plugin::
pass
and an example invocations specifying a different environment than what
the test needs:

.. code-block:: pytest
$ pytest -E stage2
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
@@ -357,11 +381,13 @@ the test needs::
======================== 1 skipped in 0.12 seconds =========================
and here is one that specifies exactly the environment needed:

.. code-block:: pytest
$ pytest -E stage1
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
@@ -374,15 +400,17 @@ The ``--markers`` option always gives you a list of available markers::
$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@@ -420,7 +448,9 @@ However, if there is a callable as the single positional argument with no keywor
def test_with_args():
pass
The output is as follows:

.. code-block:: pytest
$ pytest -q -s
Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
@@ -458,10 +488,12 @@ test function. From a conftest file we can read it like this::
@@ -42,14 +42,18 @@ Now we add a test configuration like this::
end = 2
metafunc.parametrize("param1", range(end))
This means that we only run 2 tests if we do not pass ``--all``:

.. code-block:: pytest
$ pytest -q test_compute.py
.. [100%]
2 passed in 0.12 seconds
We run only two computations, so we see two dots.
let's run the full monty:

.. code-block:: pytest
$ pytest -q --all
....F [100%]
@@ -134,12 +138,13 @@ used as the test IDs. These are succinct, but can be a pain to maintain.
In ``test_timedistance_v2``, we specified ``ids`` as a function that can generate a
string representation to make part of the test ID. So our ``datetime`` values use the
label generated by ``idfn``, but because we didn't generate a label for ``timedelta``
objects, they are still using the default pytest representation:

.. code-block:: pytest
$ pytest test_time.py --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 8 items
<Module 'test_time.py'>
@@ -191,11 +196,13 @@ only have to work a bit to construct the correct arguments for pytest's
def test_demo2(self, attribute):
assert isinstance(attribute, str)
this is a fully self-contained example which you can run with:

.. code-block:: pytest
$ pytest test_scenarios.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
@@ -203,17 +210,17 @@ this is a fully self-contained example which you can run with::
========================= 4 passed in 0.12 seconds =========================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:

.. code-block:: pytest
$ pytest --collect-only test_scenarios.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
<Instance '()'>
<Function 'test_demo1[basic]'>
<Function 'test_demo2[basic]'>
<Function 'test_demo1[advanced]'>
@@ -269,11 +276,13 @@ creates a database object for the actual test invocations::
else:
raise ValueError("invalid internal test config")
Let's first see how it looks at collection time:

.. code-block:: pytest
$ pytest test_backends.py --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items
<Module 'test_backends.py'>
@@ -282,7 +291,9 @@ Let's first see how it looks like at collection time::
======================= no tests ran in 0.12 seconds =======================
And then when we run the test:

.. code-block:: pytest
$ pytest -q test_backends.py
.F [100%]
@@ -330,11 +341,13 @@ will be passed to respective fixture function::
assert x == 'aaa'
assert y == 'b'
The result of this test will be successful:

.. code-block:: pytest
$ pytest test_indirect_list.py --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
<Module 'test_indirect_list.py'>
@@ -378,7 +391,9 @@ parametrizer`_ but in a lot less code::
pytest.raises(ZeroDivisionError, "a/b")
Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function. Let's run it:

.. code-block:: pytest
$ pytest -q
F.. [100%]
@@ -408,12 +423,14 @@ is to be run with different sets of arguments for its three arguments:
.. literalinclude:: multipython.py
Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize):

.. code-block:: pytest
. $ pytest -rs -q multipython.py
...sss...sssssssss...sss... [100%]
========================= short test summary info ==========================
SKIP [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.4' not found
12 passed, 15 skipped in 0.12 seconds
Indirect parametrization of optional implementations/imports
@@ -457,11 +474,13 @@ And finally a little test module::
A "flaky" test is one that exhibits intermittent or sporadic failure, that seems to have non-deterministic behaviour. Sometimes it passes, sometimes it fails, and it's not clear why. This page discusses pytest features that can help and other general strategies for identifying, fixing or mitigating them.
Why flaky tests are a problem
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Flaky tests are particularly troublesome when a continuous integration (CI) server is being used, so that all tests must pass before a new code change can be merged. If the test result is not a reliable signal -- that a test failure means the code change broke the test -- developers can become mistrustful of the test results, which can lead to overlooking genuine failures. It is also a source of wasted time as developers must re-run test suites and investigate spurious failures.
Potential root causes
^^^^^^^^^^^^^^^^^^^^^
System state
~~~~~~~~~~~~
Broadly speaking, a flaky test indicates that the test relies on some system state that is not being appropriately controlled - the test environment is not sufficiently isolated. Higher level tests are more likely to be flaky as they rely on more state.
Flaky tests sometimes appear when a test suite is run in parallel (such as use of pytest-xdist). This can indicate a test is reliant on test ordering.
- Perhaps a different test is failing to clean up after itself and leaving behind data which causes the flaky test to fail.
- The flaky test is reliant on data from a previous test that doesn't clean up after itself, and in parallel runs that previous test is not always present.
- Tests that modify global state typically cannot be run in parallel.
Overly strict assertion
~~~~~~~~~~~~~~~~~~~~~~~
Overly strict assertions can cause problems with floating point comparison as well as timing issues. `pytest.approx <https://docs.pytest.org/en/latest/reference.html#pytest-approx>`_ is useful here.
Pytest features
^^^^^^^^^^^^^^^
Xfail strict
~~~~~~~~~~~~
:ref:`pytest.mark.xfail ref` with ``strict=False`` can be used to mark a test so that its failure does not cause the whole build to break. This could be considered like a manual quarantine, and is rather dangerous to use permanently.
PYTEST_CURRENT_TEST
~~~~~~~~~~~~~~~~~~~
:ref:`pytest current test env` may be useful for figuring out "which test got stuck".
Plugins
~~~~~~~
Rerunning any failed tests can mitigate the negative effects of flaky tests by giving them additional chances to pass, so that the overall build does not fail. Several pytest plugins support this:
* `flaky <https://github.com/box/flaky>`_
* `pytest-flakefinder <https://github.com/dropbox/pytest-flakefinder>`_ - `blog post <https://blogs.dropbox.com/tech/2016/03/open-sourcing-pytest-tools/>`_
It can be common to split a single test suite into two, such as unit vs integration, and only use the unit test suite as a CI gate. This also helps keep build times manageable as high level tests tend to be slower. However, it means it does become possible for code that breaks the build to be merged, so extra vigilance is needed for monitoring the integration test results.
Video/screenshot on failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~
For UI tests these are important for understanding what the state of the UI was when the test failed. pytest-splinter can be used with plugins like pytest-bdd and can `save a screenshot on test failure <https://pytest-splinter.readthedocs.io/en/latest/#automatic-screenshots-on-test-failure>`_, which can help to isolate the cause.
Delete or rewrite the test
~~~~~~~~~~~~~~~~~~~~~~~~~~
If the functionality is covered by other tests, perhaps the test can be removed. If not, perhaps it can be rewritten at a lower level which will remove the flakiness or make its source more apparent.
Quarantine
~~~~~~~~~~
Mark Lapierre discusses the `Pros and Cons of Quarantined Tests <https://dev.to/mlapierre/pros-and-cons-of-quarantined-tests-2emj>`_ in a post from 2018.
CI tools that rerun on failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Azure Pipelines (the Azure cloud CI/CD tool, formerly Visual Studio Team Services or VSTS) has a feature to `identify flaky tests <https://docs.microsoft.com/en-us/azure/devops/release-notes/2017/dec-11-vsts#identify-flaky-tests>`_ and rerun failed tests.
Research
^^^^^^^^
This is a limited list, please submit an issue or pull request to expand it!
* Gao, Zebao, Yalan Liang, Myra B. Cohen, Atif M. Memon, and Zhen Wang. "Making system user interactive tests repeatable: When and what should we control?." In *Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on*, vol. 1, pp. 55-65. IEEE, 2015. `PDF <http://www.cs.umd.edu/~atif/pubs/gao-icse15.pdf>`__
* Palomba, Fabio, and Andy Zaidman. "Does refactoring of test smells induce fixing flaky tests?." In *Software Maintenance and Evolution (ICSME), 2017 IEEE International Conference on*, pp. 1-12. IEEE, 2017. `PDF in Google Drive <https://drive.google.com/file/d/10HdcCQiuQVgW3yYUJD-TSTq1NbYEprl0/view>`__
* Bell, Jonathan, Owolabi Legunsen, Michael Hilton, Lamyaa Eloussi, Tifany Yung, and Darko Marinov. "DeFlaker: Automatically detecting flaky tests." In *Proceedings of the 2018 International Conference on Software Engineering*. 2018. `PDF <https://www.jonbell.net/icse18-deflaker.pdf>`__
Resources
^^^^^^^^^
* `Eradicating Non-Determinism in Tests <https://martinfowler.com/articles/nonDeterminism.html>`_ by Martin Fowler, 2011
* `No more flaky tests on the Go team <https://www.thoughtworks.com/insights/blog/no-more-flaky-tests-go-team>`_ by Pavan Sudarshan, 2012
* `The Build That Cried Broken: Building Trust in your Continuous Integration Tests <https://www.youtube.com/embed/VotJqV4n8ig>`_ talk (video) by `Angie Jones <http://angiejones.tech/>`_ at SeleniumConf Austin 2017
* `Test and Code Podcast: Flaky Tests and How to Deal with Them <https://testandcode.com/50>`_ by Brian Okken and Anthony Shaw, 2018
* Microsoft:
  * `How we approach testing VSTS to enable continuous delivery <https://blogs.msdn.microsoft.com/bharry/2017/06/28/testing-in-a-cloud-delivery-cadence/>`_ by Brian Harry MS, 2017
  * `Eliminating Flaky Tests <https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/eliminating-flaky-tests>`_ blog and talk (video) by Munil Shah, 2017
* Google:
  * `Flaky Tests at Google and How We Mitigate Them <https://testing.googleblog.com/2016/05/flaky-tests-at-google-and-how-we.html>`_ by John Micco, 2016
  * `Where do Google's flaky tests come from? <https://docs.google.com/document/d/1mZ0-Kc97DI_F3tf_GBW_NB_aqka-P1jVOsFfufxqUUM/edit#heading=h.ec0r4fypsleh>`_ by Jeff Listfield, 2017
This is pytest version 4.x.y, imported from $PYTHON_PREFIX/lib/python3.6/site-packages/pytest.py
.. _`simpletest`:
@@ -43,11 +43,13 @@ Create a simple test function with just four lines of code::
def test_answer():
assert func(3) == 5
That’s it. You can now execute the test function:

.. code-block:: pytest
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item
@@ -90,7 +92,9 @@ Use the ``raises`` helper to assert that some code raises an exception::
with pytest.raises(SystemExit):
f()
Execute the test function with “quiet” reporting mode:

.. code-block:: pytest
$ pytest -q test_sysexit.py
. [100%]
@@ -111,7 +115,9 @@ Once you develop multiple tests, you may want to group them into a class. pytest
x = "hello"
assert hasattr(x, 'check')
``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename:

.. code-block:: pytest
$ pytest -q test_class.py
.F [100%]
@@ -138,10 +144,12 @@ Request a unique temporary directory for functional tests
# content of test_tmpdir.py
def test_needsfiles(tmpdir):
print(tmpdir)
assert 0
List the name ``tmpdir`` in the test function signature and ``pytest`` will lookup and call a fixture factory to create the resource before performing the test function call. Before the test runs, ``pytest`` creates a unique-per-test-invocation temporary directory:

.. code-block:: pytest
$ pytest -q test_tmpdir.py
F [100%]
@@ -151,7 +159,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
- pytest: recommendations, basic packages for testing in Python and Django, Andreu Vallbona, PyconES 2017 (`slides in english <http://talks.apsl.io/testing-pycones-2017/>`_, `video in spanish <https://www.youtube.com/watch?v=K20GeR-lXDk>`_)
- `Pythonic testing, Igor Starikov (Russian, PyNsk, November 2016)
``pytest`` allows one to drop into the PDB_ prompt immediately at the start of each test via a command line option::
pytest --trace
This will invoke the Python debugger at the start of every test.
.. _breakpoints:
Setting breakpoints
@@ -200,8 +259,8 @@ Pytest supports the use of ``breakpoint()`` with the following behaviours:
- When ``breakpoint()`` is called and ``PYTHONBREAKPOINT`` is set to the default value, pytest will use the custom internal PDB trace UI instead of the system default ``Pdb``.
- When tests are complete, the system will default back to the system ``Pdb`` trace UI.
- With ``--pdb`` passed to pytest, the custom internal Pdb trace UI is used with both ``breakpoint()`` and failed tests/unhandled exceptions.
- ``--pdbcls`` can be used to specify a custom debugger class.
.. _durations:
@@ -214,6 +273,7 @@ To get a list of the slowest 10 test durations::
pytest --durations=10
By default, pytest will not show test durations that are too small (<0.01s) unless ``-vv`` is passed on the command-line.