While it should be closed in logging's shutdown [1], the following would
still issue a ResourceWarning:
```
import logging

# Create a FileHandler (which opens the file) and attach it to the root logger.
log_file_handler = logging.FileHandler("temp.log", mode="w", encoding="UTF-8")
root_logger = logging.getLogger()
root_logger.addHandler(log_file_handler)

# Remove the handler again, log something, and drop the last strong reference.
root_logger.removeHandler(log_file_handler)
root_logger.error("error")
del log_file_handler
```
It looks like the weakref might get lost for some reason.
See https://github.com/pytest-dev/pytest/pull/4981/commits/92ffe42b45 / #4981
for more information.
1: c1419578a1/Lib/logging/__init__.py (L2107-L2139)
These methods were moved from xdist (ca03269).
Our intention is to keep this code closer to the core, given that it
might break easily due to refactorings.
Having it in the core might also allow us to improve the code by moving
some responsibility to the "code" objects (ReprEntry, etc.) which
are often found in the reports.
Finally, pytest-xdist and pytest-subtests can use those functions
instead of coding them themselves.
For strings, fnmatch_lines converts them into Source objects, split on
newlines. This is not necessary here, and it is more consistent to use
lists in the first place.
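For illustration, a minimal sketch of passing a list directly (the test file and
pattern below are made up):
```
def test_list_patterns(testdir):
    # Passing a list avoids the Source/splitlines conversion that a plain
    # string argument would go through.
    testdir.makepyfile("def test_ok(): assert True")
    result = testdir.runpytest()
    result.stdout.fnmatch_lines(["*1 passed*"])
```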
Should be more helpful in case of errors than before:
> assert re.match(expected_message, str(excinfo.value))
E _pytest.warning_types.PytestWarning: asserting the value None, please use "assert is None"
https://travis-ci.org/pytest-dev/pytest/jobs/509970576#L208
Without the patch the test fails as follows:
# Prepending should call fixup_namespace_packages.
monkeypatch.syspath_prepend("world")
> import ns_pkg.world
E ModuleNotFoundError: No module named 'ns_pkg.world'
- cacheprovider: move call to _ensure_supporting_files
This makes a race here less likely (it is not critical). The race
previously happened, probably with xdist, causing flaky coverage with the
`if not readme_path.is_file():` etc. checks in
`_ensure_supporting_files`, which have been removed on the `features`
branch already.
This avoids loading user configuration, which might interfere with test
results, e.g. a `~/.pdbrc.py` with pdb++.
Also sets USERPROFILE, which will be required with Python 3.8 [1].
1: https://bugs.python.org/issue36264
Fixes:
> File "…/Vcs/pytest/src/_pytest/config/__init__.py", line 60, in main
> config = _prepareconfig(args, plugins)
> File "…/Vcs/pytest/src/_pytest/config/__init__.py", line 179, in _prepareconfig
> raise TypeError(msg.format(args, type(args)))
> TypeError: `args` parameter expected to be a list or tuple of strings, got: 'empty.py' (type: <class 'str'>)
Without this change, the python-apache-libcloud tests failed
in the year 2039 with:
fp.write(struct.pack("<ll", mtime, size))
E error: 'l' format requires -2147483648 <= number <= 2147483647
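A minimal reproduction of the overflow outside of pytest (the mtime value is
made up to be past 2038; the masking shown is one possible fix, not necessarily
the exact change):
```
import struct

mtime = 2208988800  # an mtime in the year 2040, larger than 2**31 - 1
size = 1024
try:
    struct.pack("<ll", mtime, size)  # signed 32-bit fields overflow
except struct.error as exc:
    print(exc)  # 'l' format requires -2147483648 <= number <= 2147483647

# Masking to unsigned 32-bit values avoids the error.
struct.pack("<LL", mtime & 0xFFFFFFFF, size & 0xFFFFFFFF)
```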
Remove `py27-nobyte` from tox.ini, which was using xdist already.
Therefore this also removes `py27-xdist` from Travis.
"nobyte" was added in 036557ac to test that test_assertrewrite.py works
with a global PYTHONDONTWRITEBYTECODE=1 setting.
"numpy" is only a special dependency, and can be run together with
nobyte/xdist.
Also changed how the section is presented: instead of "Note" blocks, use proper
sections, as those contain enough information to exist on their own.
Fix #1680
While I think that pdb++ should be fixed in this regard (by using
`pdb.Pdb`, and not `self.__class__`, maybe), this ensures that custom
debuggers like this keep working.
This can spam a lot of warnings (per invocation), e.g.:
> lsof: WARNING: can't stat() nsfs file system /run/docker/netns/default
> Output information may be incomplete.
Or from Travis/MacOS:
> lsof: WARNING: can't stat() vmhgfs file system /Volumes/VMware Shared Folders
> Output information may be incomplete.
> assuming "dev=31000003" from mount table
This adds the `--ignore-glob` option to allow Unix-style wildcards so
that `--ignore-glob=integration*` excludes all tests that reside in
files starting with `integration`.
Fixes: #3711
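A hedged sketch of how the option could be exercised (the file names are made
up, and it assumes the glob is matched against the file's basename as described
above):
```
def test_ignore_glob_skips_integration_files(testdir):
    # One unit test file and one "integration" test file.
    testdir.makepyfile(
        test_unit="def test_unit(): assert True",
        integration_test="def test_api(): assert True",
    )
    result = testdir.runpytest("--ignore-glob=integration*")
    # Only the unit test runs; integration_test.py is excluded.
    result.assert_outcomes(passed=1)
```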
A less hacky way to produce the yellow XPASS markup. Make sure collect reports still have a "when" attribute.
"xfail" was changed to "XFAIL" in the test report, for consistency with the other outcomes, which are all caps.
Regarding tests: it merges ``test_pdb_interaction``,
``test_pdb_print_captured_stdout``, and
``test_pdb_print_captured_stderr`` into
``test_pdb_print_captured_stdout_and_stderr`` (clarity and performance,
especially since pexpect tests are slow).
* "legacy" is no longer a copy of "xunit1"
* Attempts to use "legacy" will redirect to "xunit1"
* record_xml_attribute is not compatible outside of the legacy family
* Replace call to method/override raw() with to_xml()
* Remove non-standard testcase elements: 'file' and 'line'
* Replace testcase element 'skips' with 'skipped'
* Time resolution uses the standard format: 0.000
* Tests use corrected XML output with proper attributes
- I wrote a thing: https://github.com/asottile/dead
- wanted to try it out; there are lots of false positives and I didn't look
through all the things it pointed out, but here's some of it
When rewriting assertions, pytest makes a call to
`__name__` on each object in a comparison. If one of
the objects has reimplemented `__getattr__`, it could
fail trying to fetch `__name__` with an error other than
`AttributeError`, which is all that `hasattr` catches.
In this case, the stack trace for the failed `__getattr__`
call will show up in the pytest output, even though
it isn't related to the test failure.
This change fixes that by catching the exceptions
that propagate out of `hasattr`.
Pytest rewrites assertions so that the items on each
side of a comparison will have easier-to-read names
in case of an assertion error.
Before doing this, it checks to make sure the object
doesn't have a __name__ attribute; however, it uses
`hasattr`, so if the object's __getattr__ is broken then
the test failure message will be the stack trace
for this failure instead of a rewritten assertion.
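A minimal sketch of the kind of object that could trigger this (the class and
the error below are made up for illustration):
```
class Broken:
    def __getattr__(self, name):
        # Any attribute access, including a probe for __name__,
        # raises something other than AttributeError.
        raise RuntimeError("bad attribute access: {}".format(name))


broken = Broken()  # module-level, so the rewriter probes it via hasattr


def test_compare_broken_object():
    # Before the fix, the RuntimeError from __getattr__ leaked into the
    # failure report; now only the plain assertion failure is shown.
    assert broken == 1
```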
The existing examples had 0 tests collected, so they didn't show the actual summary report. Also, I added a section explaining the difference between `p` and `P`.
Just realized while reading the changelog that "junit_time" is not a very good
name, so I decided to open this PR renaming it to "junit_duration_report", which
I believe conveys the meaning of the option better.
Often in large test suites (like pytest's), the -ra summary is very useful
for obtaining a list of failures, so we can run each failing test individually to fix it.
The problem is that the default shows errors and failures first, which leads to a lot
of scrolling to get to them.
This still does not use an actual argument parser, which only gets
instantiated further below, and it does not seem worth instantiating
one just for this pre-parsing step.
`-p` without the required value is already handled earlier, though,
so perhaps it could be passed down from there instead?
Fixes https://github.com/pytest-dev/pytest/issues/3532.
Edited the changelog for extra clarity, and to trigger auto-formatting.
Oddly enough, keeping `filename='{filename!r}'` caused an error while
collecting tests, but getting rid of the single quotes fixed it.
Hopefully closes #3191
This was documented before, but never enforced. Passing non-strings could
have strange side effects, and enforcing a string simplifies the
implementation elsewhere.
This handles `header` similarly to how Python 3.7 does it, and forwards any
other keyword arguments to the Pdb constructor.
This allows for `__import__("pdb").set_trace(skip=["foo.*"])`.
Fixes https://github.com/pytest-dev/pytest/issues/4416.
approx() was updated in 9f3122fe to work better with numpy arrays;
however, at the same time the requirements were tightened from
requiring an Iterable to requiring a Sequence - the former being
tested only by interface, while the latter requires subclassing or
registration with the abc.
Since the ApproxSequence only used __iter__ and __len__, this commit
reduces the requirement to only what's used, and allows unregistered
Sequence-like containers to be used.
Since numpy arrays qualify under the new criteria, the checks are reordered so
that generic sequences are checked after numpy arrays.
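A hedged sketch (assuming the relaxed check described above) of a container
that only implements `__iter__` and `__len__` and is not registered with
`collections.abc.Sequence`:
```
import pytest


class SequenceLike:
    """Iterable with a length, but not a registered Sequence."""

    def __init__(self, values):
        self._values = list(values)

    def __iter__(self):
        return iter(self._values)

    def __len__(self):
        return len(self._values)


def test_approx_accepts_sequence_like():
    assert [0.1 + 0.2, 0.2 + 0.4] == pytest.approx(SequenceLike([0.3, 0.6]))
```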
Do not cause a SyntaxError for something like:
> DeprecationWarning: invalid escape sequence \w
This was happening via pdb++ when it imported pygments (and that had no
compiled .pyc file).
- use py27-pexpect,py27-trial,py27-numpy and py37-xdist in the baseline;
using pexpect there catches errors with pdb tests early, and
py37-xdist is much faster than py37.
- move py34 and py36 out of baseline.
To keep existing tests which emit RemovedInPytest4Warnings running, I decided
to go with a command line option because:
* It is harder to integrate an ini option with tests which already use an ini file.
* It also marks tests which need to be removed/updated in 4.1, when
RemovedInPytest4Warning and related functionality are removed.
Fix #3737
Ref: https://github.com/pytest-dev/pytest/issues/4321#issuecomment-436951894
Hardens some of the few tests affected by this:
1. `testing/test_session.py::test_rootdir_option_arg` displayed:
> root/test_rootdir_option_arg2/test_rootdir_option_arg.py
2. `test_cmdline_python_namespace_package` displayed "hello/" prefix for:
> hello/test_hello.py::test_hello
> hello/test_hello.py::test_other
`_collect_report_last_write` might be None, when `pytest_collection` was
not called before. Not sure if this indicates another problem, but it
can be reproduced with `testing/test_collection.py::TestCollector::()::test_getcustomfile_roundtrip`.
Fixes https://github.com/pytest-dev/pytest/issues/4329
This is relevant when using runpytest in-process.
Fixes:
E def test_1(testdir):
E testdir.runpytest()
E > __import__('pdb').set_trace()
E
E ../../test_trace_after_runpytest.py:3:
E …/Vcs/pytest/src/_pytest/debugging.py:81: in set_trace
E tw = _pytest.config.create_terminal_writer(cls._config)
E
E config = None, args = (), kwargs = {}, tw = <py._io.terminalwriter.TerminalWriter object at 0x7f1097088160>
E
E def create_terminal_writer(config, *args, **kwargs):
E """Create a TerminalWriter instance configured according to the options
E in the config object. Every code which requires a TerminalWriter object
E and has access to a config object should use this function.
E """
E tw = py.io.TerminalWriter(*args, **kwargs)
E > if config.option.color == "yes":
E E AttributeError: 'NoneType' object has no attribute 'option'
This removes the hack added in https://github.com/pytest-dev/pytest/pull/3802.
Adjusts test:
- it appears to not have been changed to 7 intentionally.
- removes XXX comment, likely not relevant anymore since 6dac7743.
Also renames `_path2confmods` to `_dirpath2confmods` for clarity (it is
expected to be a dirpath in `_importconftest`).
Uses an explicit maxsize, since it appears to be only relevant for a
short period [1].
Removes the lru_cache on _getconftest_pathlist, which makes no
difference when caching _getconftestmodules, at least with the
performance test of 100x10 files (#4237).
1: https://github.com/pytest-dev/pytest/pull/4237#discussion_r228528007
The type `bytes` is available on all supported Python versions. On
Python 2.7, it is an alias of str, same as six.binary_type.
This makes the code slightly more forward-compatible.
After `pdb.set_trace()`, capturing is turned off.
This patch resumes it after using the `continue` (or `c` / `cont`)
command.
Store _pytest_capman on the class, so that pdbpp's do_debug hack keeps it.
Without this, `debug …` would fail like this:
/usr/lib/python3.6/cmd.py:217: in onecmd
return func(arg)
.venv/lib/python3.6/site-packages/pdb.py:608: in do_debug
return orig_do_debug(self, arg)
/usr/lib/python3.6/pdb.py:1099: in do_debug
sys.call_tracing(p.run, (arg, globals, locals))
/usr/lib/python3.6/bdb.py:434: in run
exec(cmd, globals, locals)
/usr/lib/python3.6/bdb.py:51: in trace_dispatch
return self.dispatch_line(frame)
/usr/lib/python3.6/bdb.py:69: in dispatch_line
self.user_line(frame)
/usr/lib/python3.6/pdb.py:261: in user_line
self.interaction(frame, None)
.venv/lib/python3.6/site-packages/pdb.py:203: in interaction
self.setup(frame, traceback)
E AttributeError: 'PytestPdb' object has no attribute '_pytest_capman'
- add pytest_leave_pdb hook
- fixes test_pdb_interaction_capturing_twice: would fail on master now,
but works here
For pytest's own suite the `cache_info()` looks as follows:
> session.config._getconftest_pathlist.cache_info()
CacheInfo(hits=231, misses=19, maxsize=None, currsize=19)
While it does not really make a difference for me, this might help with
larger test suites / the case mentioned in
https://github.com/pytest-dev/pytest/issues/2206#issuecomment-432623646.
This has been failing as of 2018-10-23 while installing gcc with
this message:
==> Installing numpy dependency: gcc
==> Downloading https://homebrew.bintray.com/bottles/gcc-8.2.0.high_sierra.bottl
######################################################################## 100.0%
==> Pouring gcc-8.2.0.high_sierra.bottle.1.tar.gz
Error: The `brew link` step did not complete successfully
The formula built, but is not symlinked into /usr/local
Could not symlink include/c++
Target /usr/local/include/c++
already exists. You may want to remove it:
rm '/usr/local/include/c++'
To force the link and overwrite all conflicting files:
brew link --overwrite gcc
To list all files that would be deleted:
brew link --overwrite --dry-run gcc
Possible conflicting files are:
/usr/local/include/c++ -> /usr/local/Caskroom/oclint/0.13.1,17.4.0/oclint-0.13.1/include/c++
Running `pytest -k doesnotmatch` on pytest's own tests takes ~3s with
Kitty terminal for me, but only ~1s with `-q`.
It also is faster with urxvt, but still takes 2.2s there.
This patch only calls `report_collect` every 0.1s, which is good enough
for reporting collection progress, and improves the time with both Kitty
and urxvt to ~1.2s for me.
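A minimal sketch of the throttling idea (the names below are illustrative, not
pytest's actual terminal code):
```
import time

REPORT_COLLECT_INTERVAL = 0.1  # seconds, as described above


class CollectProgress:
    def __init__(self):
        self._last_write = None

    def maybe_report(self, write):
        # Only emit a progress line if enough time has passed since the last
        # one; the final collection report is still written unconditionally.
        now = time.time()
        if self._last_write is not None and now - self._last_write < REPORT_COLLECT_INTERVAL:
            return
        self._last_write = now
        write("collecting ...")
```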
Some points in the document work differently in 3.9; I also changed the order
of the sections a bit to make more sense for users reading it for the first time.
Using string formatting with the raw escaped object led to the string evaluating to
"<py._xmlgen.raw object>".
Format the unescaped string first, then apply XML escaping as the last step.
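A generic sketch of the ordering (using the stdlib escape helper for
illustration rather than pytest's actual junitxml internals):
```
from xml.sax.saxutils import escape


def format_reason(reason):
    # Wrong order: escaping/wrapping in a markup object before formatting makes
    # str.format() render the wrapper's repr instead of the text.
    # Right order: build the message from plain strings, then escape once.
    message = "expected test failure: {}".format(reason)
    return escape(message)


print(format_reason('<reason with "quotes" & markup>'))
```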
`help quit` in pdb says:
> Quit from the debugger. The program being executed is aborted.
But pytest would continue with the next tests, often making it necessary
to kill the pytest process when using `--pdb` and trying to cancel the
tests using `KeyboardInterrupt` / `Ctrl-C`.
This fixes running `pytest tests/test_foo.py::test_bar`, where `tests`
is a symlink to `project/app/tests`: previously
`project/app/conftest.py` would be ignored for fixtures then.
This gets printed by the terminal reporter already, and currently
results in the same error being displayed twice, e.g. when raising an
`Exception` manually from `pytest.debugging.pytest_exception_interact`.
`expect()` expects a regular expression, so "Pdb" is equivalent to
"(Pdb)".
But instead of escaping the parentheses, this patch removes them, to
allow for matching "(Pdb++)", too.
Our policy is to not deprecate features during bugfix releases, but in this
case I believe it makes sense as we are only documenting it as deprecated,
without issuing warnings which might potentially break test suites.
This will get the word out that hook implementers should not use this parameter
at all.
Fix #4036
This should increase coverage for subprocesses, where previously
`source` paths were used only from the config file, but not the initial
`--source` argument.
Unfortunately we need to get a `py.path.local` object to perform the fnmatch
operation; it is different from the standard `fnmatch` module because it
implements its own custom logic. So we need to use `py.path` to perform
the fnmatch for backward compatibility reasons.
Ideally we should be able to use a "pure path" in `pathlib` terms (a path
not bound to the file system), but we don't have those in pylib.
Fix #3973
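A small sketch of the behavioural difference referred to above (the path is
made up; as far as I recall, py.path matches a separator-free pattern against
the basename, unlike the stdlib module):
```
import fnmatch

import py

p = py.path.local("/tmp/project/tests/test_foo.py")
# The stdlib compares against the full string, so this is False:
print(fnmatch.fnmatch(str(p), "test_*.py"))
# py.path.local.fnmatch matches a pattern without a separator against the
# basename, so this is True:
print(p.fnmatch("test_*.py"))
```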
Optionally raise an exception when parametrize collects no arguments.
Provide the name of the test causing the failure in the exception
message.
See: #3849
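A hedged sketch of the situation being addressed; by default an empty
parameter set is skipped, and the change makes it possible to fail instead,
naming the offending test:
```
import pytest


# argvalues is empty, e.g. because a build matrix or discovery step produced
# nothing; without the stricter mode this silently results in a skip.
@pytest.mark.parametrize("backend", [])
def test_connect(backend):
    assert backend.connect()
```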
* test_lf_and_ff_obey_verbosity is no longer necessary because
test_lf_and_ff_prints_no_needless_message already checks if the proper messages
are displayed when -q is used.
* Improve test_lf_and_ff_prints_no_needless_message so we also check that
the correct message is displayed when there are failures to run
Given that our guidelines demand that the CI have already passed, it seems
wasteful to run all those jobs again for the exact same commit.
As discussed in https://github.com/pytest-dev/pytest/pull/3906#issuecomment-417094481,
this will skip the "test" stage when building a tag for deployment.
doctesting: remove changedir
With coverage 5 we could use COVERAGE_RCFILE to make it find the
.coveragerc, or we could add `--rcfile` to _PYTEST_TOX_COVERAGE_RUN, but
I thought that this should not be the job that has to test whether
`--pyargs` actually works.
- Skips pypy for coverage, reports only py37 to coveralls
- tox: allow for TOXENV=py37-coverage
- tracks coverage in subprocesses, using coverage-enable-subprocess, and
parallel=1
- removes usedevelop with doctesting to match `--source` being used with
coverage
- keep coveralls for now, used with AppVeyor
What happens is that atomic_write on Python 2.7 on Windows will try
to convert the paths to unicode, but this triggers the import of
the encoding module for the file system codec, which in turn triggers
the rewrite, which in turn again tries to import the module, and so on.
This short-circuits the cases where we try to import another file when
writing a pyc file; I don't expect this to affect anything because
the only modules that could be affected are those imported by
atomic_writes.
Fix #3506
I just noticed the newly committed code block doesn't format as a code block without `::` in the paragraph before. Perhaps it doesn't make a difference, but I also corrected 5 spaces to 4, which seems standard.
It seems pytest's very comprehensive CI sniffed out a few other places with similar bugs. Ideally we should find all the places where args are not stringy and solve it at the source, but who knows how many people are relying on the implicit string conversion. See [here](https://github.com/pytest-dev/pytest/blob/master/src/_pytest/config/__init__.py#L160-L166) for one such problem area (args consisting of a single py.path.local instance are converted here, but a list or tuple containing some are not).
The problem was that _matchnodes would receive two items: [DoctestModule, Module]. It would then collect the first one, *cache it*, and fail to match against the name in the command line. Next, it would reuse the cached item (DoctestModule) instead of collecting the Module which would eventually find the "test" name on it.
Added the type of the node to the cache key to avoid this problem, although I'm not a big fan of caches that have different key types.
Fix #3843
Add a check before printing teardown logs.
The 'print_teardown_sections' method did not check the '--show-capture' option
value, so teardown logs were always printed.
Resolves: #3816
If `PYTEST_DISABLE_PLUGIN_AUTOLOAD` is set, disable auto-loading of
plugins through setuptools entrypoints. Only plugins that have been
explicitly specified are loaded.
Ref: #3784.
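A minimal sketch of exercising the environment variable from a test (the test
file is made up; only explicitly specified plugins are expected to load):
```
def test_no_autoload(testdir, monkeypatch):
    # With autoloading disabled, setuptools-entry-point plugins are not
    # registered; only plugins passed explicitly (e.g. via -p) are loaded.
    monkeypatch.setenv("PYTEST_DISABLE_PLUGIN_AUTOLOAD", "1")
    testdir.makepyfile("def test_ok(): assert True")
    result = testdir.runpytest_subprocess()
    result.assert_outcomes(passed=1)
```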
@@ -3,7 +3,7 @@ Thanks for submitting a PR, your contribution is really appreciated!
Here's a quick checklist that should be present in PRs (you can delete this text from the final description, this is
just a guideline):
- [ ] Create a new changelog file in the `changelog` folder, with a name like `<ISSUE NUMBER>.<TYPE>.rst`. See [changelog/README.rst](/changelog/README.rst) for details.
- [ ] Create a new changelog file in the `changelog` folder, with a name like `<ISSUE NUMBER>.<TYPE>.rst`. See [changelog/README.rst](https://github.com/pytest-dev/pytest/blob/master/changelog/README.rst) for details.
- [ ] Target the `master` branch for bug fixes, documentation updates and trivial changes.
- [ ] Target the `features` branch for new features and removals/deprecations.
- [ ] Include documentation when adding new features.
#. Follow **PEP-8** for naming and `black <https://github.com/ambv/black>`_ for formatting.
#. Tests are run using ``tox``::
tox -e linting,py27,py36
tox -e linting,py27,py37
The test environments above are usually enough to cover most cases locally.
@@ -237,12 +237,12 @@ Here is a simple overview, with pytest-specific bits:
#. Run all the tests
You need to have Python 2.7 and 3.6 available in your system. Now
You need to have Python 2.7 and 3.7 available in your system. Now
running tests is as simple as issuing this command::
$ tox -e linting,py27,py36
$ tox -e linting,py27,py37
This command will run tests via the "tox" tool against Python 2.7 and 3.6
This command will run tests via the "tox" tool against Python 2.7 and 3.7
and also perform "lint" coding-style checks.
#. You can now edit your local working copy and run the tests again as necessary. Please follow PEP-8 for naming.
@@ -252,9 +252,9 @@ Here is a simple overview, with pytest-specific bits:
$ tox -e py27 -- --pdb
Or to only run tests in a particular test module on Python 3.6::
Or to only run tests in a particular test module on Python 3.7::
$ tox -e py36 -- testing/test_config.py
$ tox -e py37 -- testing/test_config.py
When committing, ``pre-commit`` will re-format the files if necessary.
@@ -280,6 +280,47 @@ Here is a simple overview, with pytest-specific bits:
base: features # if it's a feature
Writing Tests
----------------------------
Writing tests for plugins or for pytest itself is often done using the `testdir fixture <https://docs.pytest.org/en/latest/reference.html#testdir>`_, as a "black-box" test.
For example, to ensure a simple test passes you can write:
.. code-block:: python

    def test_true_assertion(testdir):
        testdir.makepyfile(
            """
            def test_foo():
                assert True
            """
        )
        result = testdir.runpytest()
        result.assert_outcomes(failed=0, passed=1)
Alternatively, it is possible to make checks based on the actual output of the terminal using
========================== 1 failed in 0.04 seconds ===========================
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <http://docs.pytest.org/en/latest/getting-started.html#our-first-test-run>`_ for more examples.
Due to ``pytest``'s detailed assertion introspection, only plain ``assert`` statements are used. See `getting-started <https://docs.pytest.org/en/latest/getting-started.html#our-first-test-run>`_ for more examples.
Features
--------
- Detailed info on failing `assert statements <http://docs.pytest.org/en/latest/assert.html>`_ (no need to remember ``self.assert*`` names);
- Detailed info on failing `assert statements <https://docs.pytest.org/en/latest/assert.html>`_ (no need to remember ``self.assert*`` names);
before you import it (a good place to do that is in your root ``conftest.py``).
For further information, Benjamin Peterson wrote up `Behind the scenes of pytest's new assertion rewriting <http://pybites.blogspot.com/2011/07/behind-scenes-of-pytests-new-assertion.html>`_.
Assertion rewriting caches files on disk
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
``pytest`` will write back the rewritten modules to disk for caching. You can disable
this behavior (for example to avoid leaving stale ``.pyc`` files around in projects that
move files around a lot) by adding this to the top of your ``conftest.py`` file:
.. code-block:: python

    import sys

    sys.dont_write_bytecode = True
Note that you still get the benefits of assertion introspection, the only change is that
the ``.pyc`` files won't be cached on disk.
Additionally, rewriting will silently skip caching if it cannot write new ``.pyc`` files,
i.e. in a read-only filesystem or a zipfile.
Disabling assert rewriting
~~~~~~~~~~~~~~~~~~~~~~~~~~
``pytest`` rewrites test modules on import by using an import
hook to write new ``pyc`` files. Most of the time this works transparently.
However, if you are working with the import machinery yourself, the import hook may
interfere.
If this is the case you have two options:
* Disable rewriting for a specific module by adding the string
``PYTEST_DONT_REWRITE`` to its docstring.
* Disable rewriting for all modules by using ``--assert=plain``.
.. versionadded:: 2.1
Add assert rewriting as an alternate introspection technique.
@@ -7,14 +7,16 @@ Keeping backwards compatibility has a very high priority in the pytest project.
With the pytest 3.0 release we introduced a clear communication scheme for when we will actually remove the old busted joint and politely ask you to use the new hotness instead, while giving you enough time to adjust your tests or raise concerns if there are valid reasons to keep deprecated functionality around.
To communicate changes we are already issuing deprecation warnings, but they are not displayed by default. In pytest 3.0 we changed the default setting so that pytest deprecation warnings are displayed if not explicitly silenced (with ``--disable-pytest-warnings``).
To communicate changes we issue deprecation warnings using a custom warning hierarchy (see :ref:`internal-warnings`). These warnings may be suppressed using the standard means: ``-W`` command-line flag or ``filterwarnings`` ini options (see :ref:`warnings`), but we suggest to use these sparingly and temporarily, and heed the warnings when possible.
We will only remove deprecated functionality in major releases (e.g. if we deprecate something in 3.0 we will remove it in 4.0), and keep it around for at least two minor releases (e.g. if we deprecate something in 3.9 and 4.0 is the next release, we will not remove it in 4.0 but in 5.0).
We will only start the removal of deprecated functionality in major releases (e.g. if we deprecate something in 3.0 we will start to remove it in 4.0), and keep it around for at least two minor releases (e.g. if we deprecate something in 3.9 and 4.0 is the next release, we start to remove it in 5.0, not in 4.0).
When the deprecation expires (e.g. 4.0 is released), we won't remove the deprecated functionality immediately, but will use the standard warning filters to turn them into **errors** by default. This approach makes it explicit that removal is imminent, and still gives you time to turn the deprecated feature into a warning instead of an error so it can be dealt with in your own time. In the next minor release (e.g. 4.1), the feature will be effectively removed.
Deprecation Roadmap
-------------------
We track deprecation and removal of features using milestones and the `deprecation <https://github.com/pytest-dev/pytest/issues?q=label%3A%22type%3A+deprecation%22>`_ and `removal <https://github.com/pytest-dev/pytest/labels/type%3A%20removal>`_ labels on GitHub.
Features currently deprecated and removed in previous releases can be found in :ref:`deprecations`.
Following our deprecation policy, after starting to issue deprecation warnings we keep features for *at least* two minor versions before considering removal.
We track future deprecation and removal of features using milestones and the `deprecation <https://github.com/pytest-dev/pytest/issues?q=label%3A%22type%3A+deprecation%22>`_ and `removal <https://github.com/pytest-dev/pytest/labels/type%3A%20removal>`_ labels on GitHub.
@@ -12,7 +12,9 @@ For information on plugin hooks and objects, see :ref:`plugins`.
For information on the ``pytest.mark`` mechanism, see :ref:`mark`.
For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures (add ``-v`` to also see fixtures with leading ``_``), type ::
For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures (add ``-v`` to also see fixtures with leading ``_``), type :
.. code-block:: pytest
$ pytest -q --fixtures
cache
@@ -26,25 +28,29 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
Values can be any object handled by the json stdlib module.
capsys
Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` namedtuple. ``out`` and ``err`` will be ``text``
objects.
Enable text capturing of writes to ``sys.stdout`` and ``sys.stderr``.
The captured output is made available via ``capsys.readouterr()`` method
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``text`` objects.
capsysbinary
Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``bytes``
objects.
Enable bytes capturing of writes to ``sys.stdout`` and ``sys.stderr``.
The captured output is made available via ``capsysbinary.readouterr()``
method calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``bytes`` objects.
capfd
Enable capturing of writes to file descriptors ``1`` and ``2`` and make
captured output available via ``capfd.readouterr()`` method calls
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``text``
objects.
Enable text capturing of writes to file descriptors ``1`` and ``2``.
The captured output is made available via ``capfd.readouterr()`` method
calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``text`` objects.
capfdbinary
Enable capturing of write to file descriptors 1 and 2 and make
captured output available via ``capfdbinary.readouterr`` method calls
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be
``bytes`` objects.
Enable bytes capturing of writes to file descriptors ``1`` and ``2``.
The captured output is made available via ``capfdbinary.readouterr()``
method calls, which return a ``(out, err)`` namedtuple.
``out`` and ``err`` will be ``bytes`` objects.
doctest_namespace
Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
pytestconfig
@@ -53,7 +59,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
Example::
def test_foo(pytestconfig):
if pytestconfig.getoption("verbose"):
if pytestconfig.getoption("verbose") > 0:
...
record_property
Add extra properties to the calling test.
@@ -66,8 +72,6 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
def test_function(record_property):
record_property("example_key", 1)
record_xml_property
(Deprecated) use record_property.
record_xml_attribute
Add extra xml attributes to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being
@@ -75,7 +79,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
caplog
Access and control log capturing.
Captured logs are available through the following methods::
Captured logs are available through the following properties/methods::
You can instruct pytest to clear all cache files and values
by adding the ``--cache-clear`` option like this::
by adding the ``--cache-clear`` option like this:
.. code-block:: bash

    pytest --cache-clear
This is recommended for invocations from Continuous Integration
servers where isolation and correctness is more important
than speed.
Stepwise
--------
As an alternative to ``--lf -x``, especially for cases where you expect a large part of the test suite will fail, ``--sw``, ``--stepwise`` allows you to fix them one at a time. The test suite will run until the first failure and then stop. At the next invocation, tests will continue from the last failing test and then run until the next failing test. You may use the ``--stepwise-skip`` option to ignore one failing test and stop the test execution on the second failing test instead. This is useful if you get stuck on a failing test and just want to ignore it until later.
* ``node.warn(PytestWarning("some message"))``: is now the **recommended** way to call this function.
  The warning instance must be a PytestWarning or subclass.
* ``node.warn("CI", "some message")``: this code/message form has been **removed** and should be converted to the warning instance form above.
record_xml_property
~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
The ``record_xml_property`` fixture is now deprecated in favor of the more generic ``record_property``, which
can be used by other consumers (for example ``pytest-html``) to obtain custom information about the test run.
This is just a matter of renaming the fixture as the API is the same:
.. code-block:: python

    def test_foo(record_xml_property):
        ...
Change to:
.. code-block:: python

    def test_foo(record_property):
        ...
Passing command-line string to ``pytest.main()``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
Passing a command-line string to ``pytest.main()`` is deprecated:
.. code-block:: python

    pytest.main("-v -s")
Pass a list instead:
.. code-block:: python

    pytest.main(["-v", "-s"])
By passing a string, users expect that pytest will interpret that command-line using the shell rules they are working
on (for example ``bash`` or ``Powershell``), but this is very hard/impossible to do in a portable way.
Calling fixtures directly
~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
Calling a fixture function directly, as opposed to requesting it in a test function, is deprecated.
For example:
.. code-block:: python

    @pytest.fixture
    def cell():
        return ...


    @pytest.fixture
    def full_cell():
        cell = cell()
        cell.make_full()
        return cell
This is a great source of confusion to new users, who will often call the fixture functions and request them from test functions interchangeably, which breaks the fixture resolution model.
In those cases just request the function directly in the dependent fixture:
.. code-block:: python

    @pytest.fixture
    def cell():
        return ...


    @pytest.fixture
    def full_cell(cell):
        cell.make_full()
        return cell
Alternatively if the fixture function is called multiple times inside a test (making it hard to apply the above pattern) or
if you would like to make minimal changes to the code, you can create a fixture which calls the original function together
with the ``name`` parameter:
.. code-block:: python

    def cell():
        return ...


    @pytest.fixture(name="cell")
    def cell_fixture():
        return cell()
``yield`` tests
~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
pytest supported ``yield``-style tests, where a test function actually yields functions and values
that are then turned into proper test methods. Example:
.. code-block:: python

    def check(x, y):
        assert x ** x == y


    def test_squared():
        yield check, 2, 4
        yield check, 3, 9
This would result in two actual test functions being generated.
This form of test function doesn't support fixtures properly, and users should switch to ``pytest.mark.parametrize``:
.. code-block:: python

    @pytest.mark.parametrize("x, y", [(2, 4), (3, 9)])
    def test_squared(x, y):
        assert x ** x == y
Internal classes accessed through ``Node``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
Access of ``Module``, ``Function``, ``Class``, ``Instance``, ``File`` and ``Item`` through ``Node`` instances now issue
this warning::
usage of Function.Module is deprecated, please use pytest.Module instead
Users should just ``import pytest`` and access those objects using the ``pytest`` module.
This has been documented as deprecated for years, but only now are we actually emitting deprecation warnings.
``Node.get_marker``
~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
As part of a large :ref:`marker-revamp`, :meth:`_pytest.nodes.Node.get_marker` is deprecated. See
:ref:`the documentation <update marker code>` on tips on how to update your code.
``somefunction.markname``
~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
As part of a large :ref:`marker-revamp`, we already deprecated using ``MarkInfo``;
the only correct way to get the markers of an element is via ``node.iter_markers(name)``.
``pytest_namespace``
~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 4.0
This hook is deprecated because it greatly complicates the pytest internals regarding configuration and initialization, making some
bug fixes and refactorings impossible.
Example of usage:
.. code-block:: python

    class MySymbol:
        ...


    def pytest_namespace():
        return {"my_symbol": MySymbol()}
Plugin authors relying on this hook should instead require that users now import the plugin modules directly (with an appropriate public API).
As a stopgap measure, plugin authors may still inject their names into pytest's namespace, usually during ``pytest_configure``:
.. code-block:: python

    import pytest


    def pytest_configure():
        pytest.my_symbol = MySymbol()
Reinterpretation mode (``--assert=reinterp``)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 3.0
Reinterpretation mode has now been removed and only plain and rewrite
mode are available; consequently, the ``--assert=reinterp`` option is
no longer available. This also means files imported from plugins or
``conftest.py`` will not benefit from improved assertions by
default, you should use ``pytest.register_assert_rewrite()`` to
explicitly turn on assertion rewriting for those files.
Removed command-line options
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 3.0
The following deprecated commandline options were removed:
* ``--genscript``: no longer supported;
* ``--no-assert``: use ``--assert=plain`` instead;
* ``--nomagic``: use ``--assert=plain`` instead;
* ``--report``: use ``-r`` instead;
py.test-X* entry points
~~~~~~~~~~~~~~~~~~~~~~~
.. versionremoved:: 3.0
Removed all ``py.test-X*`` entry points. The versioned, suffixed entry points
were never documented and were a leftover from a pre-virtualenv era. These entry
points also created broken entry points in wheels, so removing them also
Registering markers for your test suite is simple::
Registering markers for your test suite is simple:
.. code-block:: ini

    # content of pytest.ini
    [pytest]
    markers =
        webtest: mark a test as a webtest.
You can ask which markers exist for your test suite - the list includes our just defined ``webtest`` markers::
You can ask which markers exist for your test suite - the list includes our just defined ``webtest`` markers:
.. code-block:: pytest
$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see http://pytest.org/latest/warnings.html#pytest-mark-filterwarnings
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@@ -237,14 +267,19 @@ Marking whole classes or modules
and an example invocation specifying a different environment than what
the test needs::
the test needs:
.. code-block:: pytest
$ pytest -E stage2
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_someenv.py s [100%]
======================== 1 skipped in 0.12 seconds =========================
and here is one that specifies exactly the environment needed::
and here is one that specifies exactly the environment needed:
.. code-block:: pytest
$ pytest -E stage1
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_someenv.py . [100%]
========================= 1 passed in 0.12 seconds =========================
The ``--markers`` option always gives you a list of available markers::
The ``--markers`` option always gives you a list of available markers:
.. code-block:: pytest
$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see http://pytest.org/latest/warnings.html#pytest-mark-filterwarnings
@pytest.mark.filterwarnings(warning): add a warning filter to the given test. see https://docs.pytest.org/en/latest/warnings.html#pytest-mark-filterwarnings
@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html
@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html
@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See https://docs.pytest.org/en/latest/skipping.html
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see https://docs.pytest.org/en/latest/parametrize.html for more info and examples.
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures
@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see https://docs.pytest.org/en/latest/fixture.html#usefixtures
@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.
@@ -400,31 +460,40 @@ Passing a callable to custom markers
.. regendoc:wipe
Below is the config file that will be used in the next examples::
Below is the config file that will be used in the next examples:
.. code-block:: python

    # content of conftest.py
    import sys


    def pytest_runtest_setup(item):
        for marker in item.iter_markers(name='my_marker'):
        for marker in item.iter_markers(name="my_marker"):
            print(marker)
            sys.stdout.flush()
A custom marker can have its argument set, i.e. ``args`` and ``kwargs`` properties, defined by either invoking it as a callable or using ``pytest.mark.MARKER_NAME.with_args``. These two methods achieve the same effect most of the time.
However, if there is a callable as the single positional argument with no keyword arguments, using the ``pytest.mark.MARKER_NAME(c)`` will not pass ``c`` as a positional argument but decorate ``c`` with the custom marker (see :ref:`MarkDecorator <mark>`). Fortunately, ``pytest.mark.MARKER_NAME.with_args`` comes to the rescue::
However, if there is a callable as the single positional argument with no keyword arguments, using the ``pytest.mark.MARKER_NAME(c)`` will not pass ``c`` as a positional argument but decorate ``c`` with the custom marker (see :ref:`MarkDecorator <mark>`). Fortunately, ``pytest.mark.MARKER_NAME.with_args`` comes to the rescue:
.. code-block:: python

    # content of test_custom_marker.py
    import pytest


    def hello_world(*args, **kwargs):
        return 'Hello World'
        return "Hello World"


    @pytest.mark.my_marker.with_args(hello_world)
    def test_with_args():
        pass
The output is as follows::
The output is as follows:
.. code-block:: pytest
$ pytest -q -s
Mark(name='my_marker', args=(<function hello_world at 0xdeadbeef>,), kwargs={})
@@ -442,12 +511,16 @@ Reading markers which were set from multiple places
.. regendoc:wipe
If you are heavily using markers in your test suite you may encounter the case where a marker is applied several times to a test function. From plugin
code you can read over all such settings. Example::
code you can read over all such settings. Example:
.. code-block:: python

    # content of test_mark_three_times.py
    import pytest

    pytestmark = pytest.mark.glob("module", x=1)


    @pytest.mark.glob("class", x=2)
    class TestClass(object):
        @pytest.mark.glob("function", x=3)
@@ -455,17 +528,22 @@ code you can read over all such settings. Example::
pass
Here we have the marker "glob" applied three times to the same
test function. From a conftest file we can read it like this::
test function. From a conftest file we can read it like this:
======================= no tests ran in 0.12 seconds =======================
@@ -191,33 +197,37 @@ only have to work a bit to construct the correct arguments for pytest's
def test_demo2(self, attribute):
assert isinstance(attribute, str)
this is a fully self-contained example which you can run with::
this is a fully self-contained example which you can run with:
.. code-block:: pytest
$ pytest test_scenarios.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
test_scenarios.py .... [100%]
========================= 4 passed in 0.12 seconds =========================
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function:
.. code-block:: pytest
$ pytest --collect-only test_scenarios.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
<Instance '()'>
<Function 'test_demo1[basic]'>
<Function 'test_demo2[basic]'>
<Function 'test_demo1[advanced]'>
<Function 'test_demo2[advanced]'>
<Module test_scenarios.py>
<Class TestSampleWithScenarios>
<Function test_demo1[basic]>
<Function test_demo2[basic]>
<Function test_demo1[advanced]>
<Function test_demo2[advanced]>
======================= no tests ran in 0.12 seconds =======================
@@ -269,20 +279,25 @@ creates a database object for the actual test invocations::
else:
raise ValueError("invalid internal test config")
Let's first see how it looks like at collection time::
Let's first see how it looks like at collection time:
.. code-block:: pytest
$ pytest test_backends.py --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
<Function 'test_db_initialized[d2]'>
<Module test_backends.py>
<Function test_db_initialized[d1]>
<Function test_db_initialized[d2]>
======================= no tests ran in 0.12 seconds =======================
And then when we run the test::
And then when we run the test:
.. code-block:: pytest
$ pytest -q test_backends.py
.F [100%]
@@ -330,15 +345,18 @@ will be passed to respective fixture function::
assert x == 'aaa'
assert y == 'b'
The result of this test will be successful::
The result of this test will be successful:
.. code-block:: pytest
$ pytest test_indirect_list.py --collect-only
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
<Module 'test_indirect_list.py'>
<Function 'test_indirect[a-b]'>
<Module test_indirect_list.py>
<Function test_indirect[a-b]>
======================= no tests ran in 0.12 seconds =======================
@@ -375,10 +393,13 @@ parametrizer`_ but in a lot less code::
assert a == b
def test_zerodivision(self, a, b):
pytest.raises(ZeroDivisionError, "a/b")
with pytest.raises(ZeroDivisionError):
a / b
Our test generator looks up a class-level definition which specifies which
argument sets to use for each test function. Let's run it::
argument sets to use for each test function. Let's run it:
.. code-block:: pytest
$ pytest -q
F.. [100%]
@@ -408,12 +429,14 @@ is to be run with different sets of arguments for its three arguments:
..literalinclude:: multipython.py
Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize)::
Running it results in some skips if we don't have all the python interpreters installed and otherwise runs all combinations (5 interpreters times 5 interpreters times 3 objects to serialize/deserialize):
.. code-block:: pytest
. $ pytest -rs -q multipython.py
...sss...sssssssss...sss... [100%]
========================= short test summary info ==========================
SKIP [15] $REGENDOC_TMPDIR/CWD/multipython.py:28: 'python3.4' not found
SKIPPED [15] $REGENDOC_TMPDIR/CWD/multipython.py:30: 'python3.4' not found
12 passed, 15 skipped in 0.12 seconds
Indirect parametrization of optional implementations/imports
@@ -457,17 +480,20 @@ And finally a little test module::
We use the ``request.module`` attribute to optionally obtain an
``smtpserver`` attribute from the test module. If we just execute
again, nothing much has changed::
again, nothing much has changed:
.. code-block:: pytest
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
@@ -476,7 +494,9 @@ server URL in its module namespace::
def test_showhelo(smtp_connection):
assert 0, smtp_connection.helo()
Running it::
Running it:
.. code-block:: pytest
$ pytest -qq --tb=short test_anothersmtp.py
F [100%]
@@ -578,7 +598,9 @@ The main change is the declaration of ``params`` with
:py:func:`@pytest.fixture <_pytest.python.fixture>`, a list of values
for each of which the fixture function will execute and can access
a value via ``request.param``. No test function code needs to change.
So let's just do another run::
So let's just do another run:
.. code-block:: pytest
$ pytest -q test_module.py
FFFF [100%]
@@ -614,7 +636,7 @@ So let's just do another run::
response, msg = smtp_connection.ehlo()
assert response == 250
> assert b"smtp.gmail.com" in msg
E AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'
E AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8\nCHUNKING'
A "flaky" test is one that exhibits intermittent or sporadic failure, that seems to have non-deterministic behaviour. Sometimes it passes, sometimes it fails, and it's not clear why. This page discusses pytest features that can help and other general strategies for identifying, fixing or mitigating them.
Why flaky tests are a problem
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Flaky tests are particularly troublesome when a continuous integration (CI) server is being used, so that all tests must pass before a new code change can be merged. If the test result is not a reliable signal -- that a test failure means the code change broke the test -- developers can become mistrustful of the test results, which can lead to overlooking genuine failures. It is also a source of wasted time as developers must re-run test suites and investigate spurious failures.
Potential root causes
^^^^^^^^^^^^^^^^^^^^^
System state
~~~~~~~~~~~~
Broadly speaking, a flaky test indicates that the test relies on some system state that is not being appropriately controlled - the test environment is not sufficiently isolated. Higher level tests are more likely to be flaky as they rely on more state.
Flaky tests sometimes appear when a test suite is run in parallel (such as with pytest-xdist). This can indicate a test is reliant on test ordering.
- Perhaps a different test is failing to clean up after itself and leaving behind data which causes the flaky test to fail.
- Perhaps the flaky test relies on data from a previous test that doesn't clean up after itself, and in parallel runs that previous test is not always present.
- Tests that modify global state typically cannot be run in parallel; isolating that state per test helps (see the sketch after this list).
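A minimal isolation sketch, assuming a hypothetical module-level cache ``mylib.CACHE`` that tests may mutate:
.. code-block:: python

    import pytest

    import mylib  # hypothetical module holding shared, mutable state


    @pytest.fixture(autouse=True)
    def _restore_global_cache():
        # Snapshot the shared dict before the test and restore it afterwards,
        # so one test cannot leak data into the next.
        saved = dict(mylib.CACHE)
        yield
        mylib.CACHE.clear()
        mylib.CACHE.update(saved)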
Overly strict assertion
~~~~~~~~~~~~~~~~~~~~~~~
Overly strict assertions can cause problems, for example with floating point comparisons or with timing-dependent values. `pytest.approx <https://docs.pytest.org/en/latest/reference.html#pytest-approx>`_ is useful here.
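For example, an exact floating point comparison is brittle, while ``pytest.approx`` allows for the usual rounding error (a minimal sketch):
.. code-block:: python

    import pytest


    def test_sum_is_close():
        result = 0.1 + 0.2
        # An exact ``result == 0.3`` would fail: the sum is 0.30000000000000004.
        assert result == pytest.approx(0.3)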
Pytest features
^^^^^^^^^^^^^^^
Xfail strict
~~~~~~~~~~~~
:ref:`pytest.mark.xfail ref` with ``strict=False`` can be used to mark a test so that its failure does not cause the whole build to break. This could be considered like a manual quarantine, and is rather dangerous to use permanently.
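A sketch of such a manual quarantine (the ``reason`` text is only an example):
.. code-block:: python

    import pytest


    @pytest.mark.xfail(strict=False, reason="flaky, see issue tracker")
    def test_sometimes_times_out():
        ...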
PYTEST_CURRENT_TEST
~~~~~~~~~~~~~~~~~~~
:ref:`pytest current test env` may be useful for figuring out "which test got stuck".
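For example, code running inside the test process (a watchdog thread, a logging hook, etc.) can read the variable to record which test is currently active (a small sketch):
.. code-block:: python

    import os


    def log_current_test():
        # pytest sets this to e.g. "test_module.py::test_name (call)".
        print(os.environ.get("PYTEST_CURRENT_TEST", "<no test running>"))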
Plugins
~~~~~~~
Rerunning any failed tests can mitigate the negative effects of flaky tests by giving them additional chances to pass, so that the overall build does not fail. Several pytest plugins support this:
* `flaky <https://github.com/box/flaky>`_
* `pytest-flakefinder <https://github.com/dropbox/pytest-flakefinder>`_ - `blog post <https://blogs.dropbox.com/tech/2016/03/open-sourcing-pytest-tools/>`_
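As an illustration, the ``flaky`` plugin listed above exposes a decorator for retrying individual tests (a sketch, assuming the plugin is installed; the parameter values are only an example):
.. code-block:: python

    from flaky import flaky


    @flaky(max_runs=3, min_passes=1)
    def test_talks_to_unreliable_service():
        ...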
It can be common to split a single test suite into two, such as unit vs integration, and only use the unit test suite as a CI gate. This also helps keep build times manageable as high level tests tend to be slower. However, it means it becomes possible for code that breaks the build to be merged, so extra vigilance is needed for monitoring the integration test results.
Video/screenshot on failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~
For UI tests these are important for understanding what the state of the UI was when the test failed. pytest-splinter can be used with plugins like pytest-bdd and can `save a screenshot on test failure <https://pytest-splinter.readthedocs.io/en/latest/#automatic-screenshots-on-test-failure>`_, which can help to isolate the cause.
Delete or rewrite the test
~~~~~~~~~~~~~~~~~~~~~~~~~~
If the functionality is covered by other tests, perhaps the test can be removed. If not, perhaps it can be rewritten at a lower level which will remove the flakiness or make its source more apparent.
Quarantine
~~~~~~~~~~
Mark Lapierre discusses the `Pros and Cons of Quarantined Tests <https://dev.to/mlapierre/pros-and-cons-of-quarantined-tests-2emj>`_ in a post from 2018.
CI tools that rerun on failure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Azure Pipelines (the Azure cloud CI/CD tool, formerly Visual Studio Team Services or VSTS) has a feature to `identify flaky tests <https://docs.microsoft.com/en-us/azure/devops/release-notes/2017/dec-11-vsts#identify-flaky-tests>`_ and rerun failed tests.
Research
^^^^^^^^
This is a limited list; please submit an issue or pull request to expand it!
* Gao, Zebao, Yalan Liang, Myra B. Cohen, Atif M. Memon, and Zhen Wang. "Making system user interactive tests repeatable: When and what should we control?." In *Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on*, vol. 1, pp. 55-65. IEEE, 2015. `PDF <http://www.cs.umd.edu/~atif/pubs/gao-icse15.pdf>`__
* Palomba, Fabio, and Andy Zaidman. "Does refactoring of test smells induce fixing flaky tests?." In *Software Maintenance and Evolution (ICSME), 2017 IEEE International Conference on*, pp. 1-12. IEEE, 2017. `PDF in Google Drive <https://drive.google.com/file/d/10HdcCQiuQVgW3yYUJD-TSTq1NbYEprl0/view>`__
* Bell, Jonathan, Owolabi Legunsen, Michael Hilton, Lamyaa Eloussi, Tifany Yung, and Darko Marinov. "DeFlaker: Automatically detecting flaky tests." In *Proceedings of the 2018 International Conference on Software Engineering*. 2018. `PDF <https://www.jonbell.net/icse18-deflaker.pdf>`__
Resources
^^^^^^^^^
* `Eradicating Non-Determinism in Tests <https://martinfowler.com/articles/nonDeterminism.html>`_ by Martin Fowler, 2011
* `No more flaky tests on the Go team <https://www.thoughtworks.com/insights/blog/no-more-flaky-tests-go-team>`_ by Pavan Sudarshan, 2012
* `The Build That Cried Broken: Building Trust in your Continuous Integration Tests <https://www.youtube.com/embed/VotJqV4n8ig>`_ talk (video) by `Angie Jones <http://angiejones.tech/>`_ at SeleniumConf Austin 2017
* `Test and Code Podcast: Flaky Tests and How to Deal with Them <https://testandcode.com/50>`_ by Brian Okken and Anthony Shaw, 2018
* Microsoft:
* `How we approach testing VSTS to enable continuous delivery <https://blogs.msdn.microsoft.com/bharry/2017/06/28/testing-in-a-cloud-delivery-cadence/>`_ by Brian Harry MS, 2017
* `Eliminating Flaky Tests <https://docs.microsoft.com/en-us/azure/devops/learn/devops-at-microsoft/eliminating-flaky-tests>`_ blog and talk (video) by Munil Shah, 2017
* Google:
* `Flaky Tests at Google and How We Mitigate Them <https://testing.googleblog.com/2016/05/flaky-tests-at-google-and-how-we.html>`_ by John Micco, 2016
* `Where do Google's flaky tests come from? <https://docs.google.com/document/d/1mZ0-Kc97DI_F3tf_GBW_NB_aqka-P1jVOsFfufxqUUM/edit#heading=h.ec0r4fypsleh>`_ by Jeff Listfield, 2017
**Documentation as PDF**: `download latest <https://media.readthedocs.org/pdf/pytest/latest/pytest.pdf>`_
``pytest`` is a framework that makes building simple and scalable tests easy. Tests are expressive and readable—no boilerplate code required. Get started in minutes with a small unit test or complex functional test for your application or library.
@@ -20,14 +17,18 @@ Installation and Getting Started
Install ``pytest``
----------------------------------------
1. Run the following command in your command line:
.. code-block:: bash
pip install -U pytest
2. Check that you installed the correct version:
.. code-block:: bash
$ pytest --version
This is pytest version 4.x.y, imported from $PYTHON_PREFIX/lib/python3.6/site-packages/pytest.py
.. _`simpletest`:
@@ -43,12 +44,15 @@ Create a simple test function with just four lines of code::
def test_answer():
    assert func(3) == 5
That’s it. You can now execute the test function:
.. code-block:: pytest
$ pytest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-4.x.y, py-1.x.y, pluggy-0.x.y
cachedir: $PYTHON_PREFIX/.pytest_cache
rootdir: $REGENDOC_TMPDIR
collected 1 item
test_sample.py F [100%]
@@ -90,7 +94,9 @@ Use the ``raises`` helper to assert that some code raises an exception::
with pytest.raises(SystemExit):
    f()
Execute the test function with “quiet” reporting mode:
.. code-block:: pytest
$ pytest -q test_sysexit.py
. [100%]
@@ -111,7 +117,9 @@ Once you develop multiple tests, you may want to group them into a class. pytest
x = "hello"
assert hasattr(x, 'check')
``pytest`` discovers all tests following its :ref:`Conventions for Python test discovery <test discovery>`, so it finds both ``test_`` prefixed functions. There is no need to subclass anything. We can simply run the module by passing its filename:
.. code-block:: pytest
$ pytest -q test_class.py
.F [100%]
@@ -138,10 +146,12 @@ Request a unique temporary directory for functional tests
# content of test_tmpdir.py
def test_needsfiles(tmpdir):
    print(tmpdir)
    assert 0
List the name ``tmpdir`` in the test function signature and ``pytest`` will look up and call a fixture factory to create the resource before performing the test function call. Before the test runs, ``pytest`` creates a unique-per-test-invocation temporary directory:
.. code-block:: pytest
$ pytest -q test_tmpdir.py
F [100%]
@@ -151,7 +161,7 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')
def test_needsfiles(tmpdir):
    print(tmpdir)
> assert 0
E assert 0
@@ -162,7 +172,9 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
More info on tmpdir handling is available at :ref:`Temporary directories and files <tmpdir handling>`.
Find out what kind of builtin :ref:`pytest fixtures <fixtures>` exist with the command:
.. code-block:: bash
pytest --fixtures # shows builtin and custom fixtures
==================== 1 failed, 2 passed in 0.12 seconds ====================
.. note::
pytest by default escapes any non-ASCII characters used in unicode strings
for the parametrization because displaying them directly has several downsides.
If however you would like to use unicode strings in parametrization and see them in the terminal as is (non-escaped), use this option in your ``pytest.ini``:
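A sketch of such a configuration (assuming the ``disable_test_id_escaping_and_forfeit_all_rights_to_community_support`` option, whose deliberately long name signals the trade-off):
.. code-block:: ini

    # content of pytest.ini (sketch)
    [pytest]
    disable_test_id_escaping_and_forfeit_all_rights_to_community_support = True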
upstream support `ending in 2020 <https://legacy.python.org/dev/peps/pep-0373/#id4>`__.
Python 3.4's last release is scheduled for
`March 2019 <https://www.python.org/dev/peps/pep-0429/#release-schedule>`__. pytest is one of
the projects participating in https://python3statement.org.
We plan to drop support for Python 2.7 and 3.4 at the same time with the release of **pytest 5.0**,
scheduled to be released by **mid-2019**. Thanks to the `python_requires <https://packaging.python.org/guides/distributing-packages-using-setuptools/#python-requires>`__ ``setuptools`` option,
Python 2.7 and Python 3.4 users using a modern ``pip`` version
will install the last compatible pytest ``4.X`` version automatically even if ``5.0`` or later
are available on PyPI.
During the period **from mid-2019 to 2020**, the pytest core team plans to make
bug-fix releases of the pytest ``4.X`` series by back-porting patches to the ``4.x-maintenance``
branch.
**After 2020**, the core team will no longer actively back-port patches, but the ``4.x-maintenance``
branch will continue to exist so the community itself can contribute patches. The
core team will be happy to accept those patches and make new ``4.X`` releases **until mid-2020**.
- pytest: recommendations, basic packages for testing in Python and Django, Andreu Vallbona, PyconES 2017 (`slides in English <http://talks.apsl.io/testing-pycones-2017/>`_, `video in Spanish <https://www.youtube.com/watch?v=K20GeR-lXDk>`_)
- `pytest advanced, Andrew Svetlov (Russian, PyCon Russia, 2016)
<https://www.youtube.com/watch?v=7KgihdKTWY4>`_.
- `Pythonic testing, Igor Starikov (Russian, PyNsk, November 2016)
``pytest`` allows one to drop into the PDB_ prompt immediately at the start of each test via a command line option:
.. code-block:: bash
pytest --trace
@@ -196,10 +384,8 @@ in your code and pytest automatically disables its output capture for that test:
* Output capture in other tests is not affected.
* Any prior test output that has already been captured will be processed as such.
* Any later output produced within the same test will not be captured and will
instead get sent directly to ``sys.stdout``. Note that this holds true even
for test output occurring after you exit the interactive PDB_ tracing session
and continue with the regular test run.
* Output capture gets resumed when ending the debugger session (via the
``continue`` command).
.. _`breakpoint-builtin`:
@@ -212,8 +398,8 @@ Pytest supports the use of ``breakpoint()`` with the following behaviours:
- When ``breakpoint()`` is called and ``PYTHONBREAKPOINT`` is set to the default value, pytest will use the custom internal PDB trace UI instead of the system default ``Pdb``.
- When tests are complete, the system will default back to the system ``Pdb`` trace UI.
- With ``--pdb`` passed to pytest, the custom internal Pdb trace UI is used with both ``breakpoint()`` and failed tests/unhandled exceptions.
- ``--pdbcls`` can be used to specify a custom debugger class.
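For example, pointing ``--pdbcls`` at IPython's debugger (assuming IPython is installed; the module path shown is the commonly used one):
.. code-block:: bash

    pytest --pdbcls=IPython.terminal.debugger:TerminalPdb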
.. _durations:
@@ -222,16 +408,21 @@ Profiling test execution duration
.. versionadded:: 2.2
To get a list of the slowest 10 test durations:
.. code-block:: bash
pytest --durations=10
By default, pytest will not show test durations that are too small (<0.01s) unless ``-vv`` is passed on the command-line.
These warnings might be filtered using the same builtin mechanisms used to filter other types of warnings.
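For instance, a specific pytest warning can be ignored for a single test with the ``filterwarnings`` mark (a sketch; the warning class here is just an example):
.. code-block:: python

    import pytest


    @pytest.mark.filterwarnings("ignore::pytest.PytestExperimentalApiWarning")
    def test_uses_experimental_api():
        ...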
Please read our :ref:`backwards-compatibility` to learn how we proceed about deprecating and eventually removing
features.
The following warning types are used by pytest and are part of the public API:
.. autoclass:: pytest.PytestWarning
.. autoclass:: pytest.PytestDeprecationWarning
.. autoclass:: pytest.RemovedInPytest4Warning
.. autoclass:: pytest.PytestExperimentalApiWarning