apply most other hooks and opt out of black reformatting

parent b60376dc28
commit 86fc31db8d

@@ -1,10 +1,10 @@
exclude: doc/en/example/py2py3/test_py2.py
repos:
- repo: https://github.com/ambv/black
  rev: 18.4a4
  hooks:
  - id: black
-   args: [--safe, --quiet]
+   args: [--safe, --quiet, --check]
    python_version: python3.6
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v1.2.3

@@ -139,7 +139,7 @@ Here's a rundown of how a repository transfer usually proceeds
* ``joedoe`` transfers repository ownership to ``pytest-dev`` administrator ``calvin``.
* ``calvin`` creates ``pytest-xyz-admin`` and ``pytest-xyz-developers`` teams, inviting ``joedoe`` to both as **maintainer**.
* ``calvin`` transfers repository to ``pytest-dev`` and configures team access:

  - ``pytest-xyz-admin`` **admin** access;
  - ``pytest-xyz-developers`` **write** access;

@@ -203,15 +203,15 @@ Here is a simple overview, with pytest-specific bits:
    $ git clone git@github.com:YOUR_GITHUB_USERNAME/pytest.git
    $ cd pytest
    # now, to fix a bug create your own branch off "master":

    $ git checkout -b your-bugfix-branch-name master

    # or to instead add a feature create your own branch off "features":

    $ git checkout -b your-feature-branch-name features

Given we have "major.minor.micro" version numbers, bugfixes will usually
be released in micro releases whereas features will be released in
minor releases and incompatible changes in major releases.

If you need some help with Git, follow this quick start

@@ -4,7 +4,7 @@ text that will be added to the next ``CHANGELOG``.
The ``CHANGELOG`` will be read by users, so this description should be aimed to pytest users
instead of describing internal changes which are only relevant to the developers.

Make sure to use full sentences with correct case and punctuation, for example::

    Fix issue with non-ascii messages from the ``warnings`` module.

@@ -6,4 +6,4 @@ pygments_style = flask_theme_support.FlaskyStyle
[options]
index_logo = ''
index_logo_height = 120px
touch_icon =

@@ -5,7 +5,7 @@ Release announcements
.. toctree::
   :maxdepth: 2

   release-3.6.0
   release-3.5.1
   release-3.5.0

@@ -1,4 +1,4 @@
py.test 2.0.3: bug fixes and speed ups
===========================================================================

Welcome to pytest-2.0.3, a maintenance and bug fix release of pytest,

@@ -9,7 +9,7 @@ and integration testing. See extensive docs with examples here:

The release contains another fix to the perfected assertions introduced
with the 2.1 series as well as the new possibility to customize reporting
for assertion expressions on a per-directory level.

If you want to install or upgrade pytest, just type one of::

@@ -27,7 +27,7 @@ Changes between 2.2.0 and 2.2.1
----------------------------------------

- fix issue99 (in pytest and py) internallerrors with resultlog now
  produce better output - fixed by normalizing pytest_internalerror
  input arguments.
- fix issue97 / traceback issues (in pytest and py) improve traceback output
  in conjunction with jinja2 and cython which hack tracebacks

@@ -35,7 +35,7 @@ Changes between 2.2.0 and 2.2.1
  the final test in a test node will now run its teardown directly
  instead of waiting for the end of the session. Thanks Dave Hunt for
  the good reporting and feedback. The pytest_runtest_protocol as well
  as the pytest_runtest_teardown hooks now have "nextitem" available
  which will be None indicating the end of the test run.
- fix collection crash due to unknown-source collected items, thanks
  to Ralf Schmitt (fixed by depending on a more recent pylib)

@@ -4,7 +4,7 @@ pytest-2.2.2: bug fixes
pytest-2.2.2 (updated to 2.2.3 to fix packaging issues) is a minor
backward-compatible release of the versatile py.test testing tool. It
contains bug fixes and a few refinements particularly to reporting with
"--collectonly", see below for betails.

For general information see here:

@@ -27,7 +27,7 @@ Changes between 2.2.1 and 2.2.2

- fix issue101: wrong args to unittest.TestCase test function now
  produce better output
- fix issue102: report more useful errors and hints for when a
  test directory was renamed and some pyc/__pycache__ remain
- fix issue106: allow parametrize to be applied multiple times
  e.g. from module, class and at function level.

@@ -38,6 +38,6 @@ Changes between 2.2.1 and 2.2.2
- fix issue115: make --collectonly robust against early failure
  (missing files/directories)
- "-qq --collectonly" now shows only files and the number of tests in them
- "-q --collectonly" now shows test ids
- allow adding of attributes to test reports such that it also works
  with distributed testing (no upgrade of pytest-xdist needed)

@@ -1,7 +1,7 @@
pytest-2.3: improved fixtures / better unittest integration
=============================================================================

pytest-2.3 comes with many major improvements for fixture/funcarg management
and parametrized testing in Python. It is now easier, more efficient and
more predicatable to re-run the same tests with different fixture
instances. Also, you can directly declare the caching "scope" of

@@ -9,7 +9,7 @@ fixtures so that dependent tests throughout your whole test suite can
re-use database or other expensive fixture objects with ease. Lastly,
it's possible for fixture functions (formerly known as funcarg
factories) to use other fixtures, allowing for a completely modular and
re-useable fixture design.

For detailed info and tutorial-style examples, see:

@@ -27,7 +27,7 @@ All changes are backward compatible and you should be able to continue
to run your test suites and 3rd party plugins that worked with
pytest-2.2.4.

If you are interested in the precise reasoning (including examples) of the
pytest-2.3 fixture evolution, please consult
http://pytest.org/latest/funcarg_compare.html

@@ -43,7 +43,7 @@ and more details for those already in the knowing of pytest can be found
in the CHANGELOG below.

Particular thanks for this release go to Floris Bruynooghe, Alex Okrushko
Carl Meyer, Ronny Pfannschmidt, Benjamin Peterson and Alex Gaynor for helping
to get the new features right and well integrated. Ronny and Floris
also helped to fix a number of bugs and yet more people helped by
providing bug reports.

@@ -94,7 +94,7 @@ Changes between 2.2.4 and 2.3.0
- pluginmanager.register(...) now raises ValueError if the
  plugin has been already registered or the name is taken

- fix issue159: improve http://pytest.org/latest/faq.html
  especially with respect to the "magic" history, also mention
  pytest-django, trial and unittest integration.

@@ -125,7 +125,7 @@ Changes between 2.2.4 and 2.3.0
  you can use startdir.bestrelpath(yourpath) to show
  nice relative path

- allow plugins to implement both pytest_report_header and
  pytest_sessionstart (sessionstart is invoked first).

- don't show deselected reason line if there is none

@@ -3,16 +3,16 @@ pytest-2.3.1: fix regression with factory functions

pytest-2.3.1 is a quick follow-up release:

- fix issue202 - regression with fixture functions/funcarg factories:
  using "self" is now safe again and works as in 2.2.4. Thanks
  to Eduard Schettino for the quick bug report.

- disable pexpect pytest self tests on Freebsd - thanks Koob for the
  quick reporting

- fix/improve interactive docs with --markers

See

  http://pytest.org/

@@ -8,9 +8,9 @@ pytest-2.3.2 is another stabilization release:
- fix teardown-ordering for parametrized setups
- fix unittest and trial compat behaviour with respect to runTest() methods
- issue 206 and others: some improvements to packaging
- fix issue127 and others: improve some docs

See

  http://pytest.org/

@@ -26,7 +26,7 @@ holger krekel
Changes between 2.3.1 and 2.3.2
-----------------------------------

- fix issue208 and fix issue29 use new py version to avoid long pauses
  when printing tracebacks in long modules

- fix issue205 - conftests in subdirs customizing

@@ -6,7 +6,7 @@ which offers uebersimple assertions, scalable fixture mechanisms
and deep customization for testing with Python. Particularly,
this release provides:

- integration fixes and improvements related to flask, numpy, nose,
  unittest, mock

- makes pytest work on py24 again (yes, people sometimes still need to use it)

@@ -16,7 +16,7 @@ this release provides:
Thanks to Manuel Jacob, Thomas Waldmann, Ronny Pfannschmidt, Pavel Repin
and Andreas Taumoefolau for providing patches and all for the issues.

See

  http://pytest.org/

@@ -10,10 +10,10 @@ comes with the following fixes and features:
  can write: -k "name1 or name2" etc. This is a slight usage incompatibility
  if you used special syntax like "TestClass.test_method" which you now
  need to write as -k "TestClass and test_method" to match a certain
  method in a certain test class.
- allow to dynamically define markers via
  item.keywords[...]=assignment integrating with "-m" option
- yielded test functions will now have autouse-fixtures active but
  cannot accept fixtures as funcargs - it's anyway recommended to
  rather use the post-2.0 parametrize features instead of yield, see:
  http://pytest.org/latest/example/parametrize.html
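
The -k selection syntax described in that hunk can be tried against a small
test module; the class and method names below are purely illustrative::

    # test_sample.py -- illustrative names only
    class TestClass:
        def test_method(self):
            assert True

        def test_other(self):
            assert True

    # select a single method with the 2.3.4+ syntax:
    #   pytest -k "TestClass and test_method"
    # select by name fragments:
    #   pytest -k "method or other"
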
@@ -26,7 +26,7 @@ comes with the following fixes and features:

Thanks in particular to Thomas Waldmann for spotting and reporting issues.

See

  http://pytest.org/

@@ -13,7 +13,7 @@ few interesting new plugins saw the light last month:
- pytest-random: randomize test ordering

And several others like pytest-django saw maintenance releases.
For a more complete list, check out
https://pypi.org/search/?q=pytest

For general information see:

@@ -81,7 +81,7 @@ Changes between 2.3.4 and 2.3.5
- fix bug where using capsys with pytest.set_trace() in a test
  function would break when looking at capsys.readouterr()

- allow to specify prefixes starting with "_" when
  customizing python_functions test discovery. (thanks Graham Horler)

- improve PYTEST_DEBUG tracing output by putting

@@ -1,9 +1,9 @@
pytest-2.4.0: new fixture features/hooks and bug fixes
===========================================================================

The just released pytest-2.4.0 brings many improvements and numerous
bug fixes while remaining plugin- and test-suite compatible apart
from a few supposedly very minor incompatibilities. See below for
a full list of details. A few feature highlights:

- new yield-style fixtures `pytest.yield_fixture

@@ -13,7 +13,7 @@ a full list of details. A few feature highlights:
- improved pdb support: ``import pdb ; pdb.set_trace()`` now works
  without requiring prior disabling of stdout/stderr capturing.
  Also the ``--pdb`` options works now on collection and internal errors
  and we introduced a new experimental hook for IDEs/plugins to
  intercept debugging: ``pytest_exception_interact(node, call, report)``.

- shorter monkeypatch variant to allow specifying an import path as
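
A minimal sketch of the shorter monkeypatch variant referred to above, passing
the target as a single import-path string; the patched function is only an
example::

    def test_cwd_looks_like_root(monkeypatch):
        # same effect as monkeypatch.setattr(os, "getcwd", ...)
        monkeypatch.setattr("os.getcwd", lambda: "/")
        import os
        assert os.getcwd() == "/"
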
@@ -23,7 +23,7 @@ a full list of details. A few feature highlights:
  called if the corresponding setup method succeeded.

- integrate tab-completion on command line options if you
-  have `argcomplete <https://pypi.org/project/argcomplete/>`_
+  have `argcomplete <http://pypi.python.org/pypi/argcomplete>`_
  configured.

- allow boolean expression directly with skipif/xfail
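
A sketch of the boolean-condition form mentioned in the last bullet above
(previously the condition had to be given as a string)::

    import sys
    import pytest

    @pytest.mark.skipif(sys.version_info < (3, 3), reason="requires Python 3.3")
    def test_new_syntax():
        assert True

    @pytest.mark.xfail(sys.platform == "win32", reason="known to fail on Windows")
    def test_posix_behaviour():
        assert True
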
@@ -36,8 +36,8 @@ a full list of details. A few feature highlights:
- reporting: color the last line red or green depending if
  failures/errors occurred or everything passed.

The documentation has been updated to accommodate the changes,
see `http://pytest.org <http://pytest.org>`_

To install or upgrade pytest::

@@ -45,8 +45,8 @@ To install or upgrade pytest::
    easy_install -U pytest


**Many thanks to all who helped, including Floris Bruynooghe,
Brianna Laugher, Andreas Pelme, Anthon van der Neut, Anatoly Bubenkoff,
Vladimir Keleshev, Mathieu Agopian, Ronny Pfannschmidt, Christian
Theunert and many others.**

@@ -101,12 +101,12 @@ new features:
- make "import pdb ; pdb.set_trace()" work natively wrt capturing (no
  "-s" needed anymore), making ``pytest.set_trace()`` a mere shortcut.

- fix issue181: --pdb now also works on collect errors (and
  on internal errors) . This was implemented by a slight internal
  refactoring and the introduction of a new hook
  ``pytest_exception_interact`` hook (see next item).

- fix issue341: introduce new experimental hook for IDEs/terminals to
  intercept debugging: ``pytest_exception_interact(node, call, report)``.

- new monkeypatch.setattr() variant to provide a shorter
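
A conftest.py sketch of the experimental ``pytest_exception_interact`` hook
named above; the print call merely stands in for whatever an IDE or plugin
would do with the failure::

    # conftest.py (sketch)
    def pytest_exception_interact(node, call, report):
        # invoked when an exception was raised and can be interactively handled
        print("exception in %s: %s" % (node.nodeid, call.excinfo.typename))
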
@@ -124,7 +124,7 @@ new features:
  phase of a node.

- simplify pytest.mark.parametrize() signature: allow to pass a
  CSV-separated string to specify argnames. For example:
  ``pytest.mark.parametrize("input,expected", [(1,2), (2,3)])``
  works as well as the previous:
  ``pytest.mark.parametrize(("input", "expected"), ...)``.
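
Both spellings of the parametrize signature mentioned above select the same
parameter sets; a minimal, self-contained sketch::

    import pytest

    @pytest.mark.parametrize("input,expected", [(1, 2), (2, 3)])
    def test_increment(input, expected):
        assert input + 1 == expected

    # equivalent to the older tuple form:
    # @pytest.mark.parametrize(("input", "expected"), [(1, 2), (2, 3)])
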
@@ -149,10 +149,10 @@ new features:

Bug fixes:

- fix issue358 - capturing options are now parsed more properly
  by using a new parser.parse_known_args method.

- pytest now uses argparse instead of optparse (thanks Anthon) which
  means that "argparse" is added as a dependency if installing into python2.6
  environments or below.

@@ -193,7 +193,7 @@ Bug fixes:
- fix issue323 - sorting of many module-scoped arg parametrizations

- make sessionfinish hooks execute with the same cwd-context as at
  session start (helps fix plugin behaviour which write output files
  with relative path such as pytest-cov)

- fix issue316 - properly reference collection hooks in docs

@@ -201,7 +201,7 @@ Bug fixes:
- fix issue 306 - cleanup of -k/-m options to only match markers/test
  names/keywords respectively. Thanks Wouter van Ackooy.

- improved doctest counting for doctests in python modules --
  files without any doctest items will not show up anymore
  and doctest examples are counted as separate test items.
  thanks Danilo Bellini.

@@ -211,7 +211,7 @@ Bug fixes:
  mode. Thanks Jason R. Coombs.

- fix junitxml generation when test output contains control characters,
  addressing issue267, thanks Jaap Broekhuizen

- fix issue338: honor --tb style for setup/teardown errors as well. Thanks Maho.

@@ -220,5 +220,5 @@ Bug fixes:
- better parametrize error messages, thanks Brianna Laugher

- pytest_terminal_summary(terminalreporter) hooks can now use
  ".section(title)" and ".line(msg)" methods to print extra
  information at the end of a test run.
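
A conftest.py sketch of the ``pytest_terminal_summary`` usage described in
the last bullet; the section title and message are placeholders::

    # conftest.py (sketch)
    def pytest_terminal_summary(terminalreporter):
        terminalreporter.section("project summary")
        terminalreporter.line("extra information printed at the end of the run")
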
@@ -8,7 +8,7 @@ compared to 2.3.5 before they hit more people:
  "type" keyword should also be converted to the respective types.
  thanks Floris Bruynooghe, @dnozay. (fixes issue360 and issue362)

- fix dotted filename completion when using argcomplete
  thanks Anthon van der Neuth. (fixes issue361)

- fix regression when a 1-tuple ("arg",) is used for specifying

@@ -26,9 +26,9 @@ pytest-2.4.2 is another bug-fixing release:

- remove attempt to "dup" stdout at startup as it's icky.
  the normal capturing should catch enough possibilities
  of tests messing up standard FDs.

- add pluginmanager.do_configure(config) as a link to
  config.do_configure() for plugin-compatibility

as usual, docs at http://pytest.org and upgrades via::

@@ -4,7 +4,7 @@ pytest-2.5.0: now down to ZERO reported bugs!
pytest-2.5.0 is a big fixing release, the result of two community bug
fixing days plus numerous additional works from many people and
reporters. The release should be fully compatible to 2.4.2, existing
plugins and test suites. We aim at maintaining this level of ZERO reported
bugs because it's no fun if your testing tool has bugs, is it? Under a
condition, though: when submitting a bug report please provide
clear information about the circumstances and a simple example which

@@ -17,12 +17,12 @@ help.
For those who use older Python versions, please note that pytest is not
automatically tested on python2.5 due to virtualenv, setuptools and tox
not supporting it anymore. Manual verification shows that it mostly
works fine but it's not going to be part of the automated release
process and thus likely to break in the future.

As usual, current docs are at

  http://pytest.org

and you can upgrade from pypi via::

@@ -40,28 +40,28 @@ holger krekel
2.5.0
-----------------------------------

- dropped python2.5 from automated release testing of pytest itself
  which means it's probably going to break soon (but still works
  with this release we believe).

- simplified and fixed implementation for calling finalizers when
  parametrized fixtures or function arguments are involved. finalization
  is now performed lazily at setup time instead of in the "teardown phase".
  While this might sound odd at first, it helps to ensure that we are
  correctly handling setup/teardown even in complex code. User-level code
  should not be affected unless it's implementing the pytest_runtest_teardown
  hook and expecting certain fixture instances are torn down within (very
  unlikely and would have been unreliable anyway).

- PR90: add --color=yes|no|auto option to force terminal coloring
  mode ("auto" is default). Thanks Marc Abramowitz.

- fix issue319 - correctly show unicode in assertion errors. Many
  thanks to Floris Bruynooghe for the complete PR. Also means
  we depend on py>=1.4.19 now.

- fix issue396 - correctly sort and finalize class-scoped parametrized
  tests independently from number of methods on the class.

- refix issue323 in a better way -- parametrization should now never
  cause Runtime Recursion errors because the underlying algorithm

@@ -70,18 +70,18 @@ holger krekel
  to problems for more than >966 non-function scoped parameters).

- fix issue290 - there is preliminary support now for parametrizing
  with repeated same values (sometimes useful to test if calling
  a second time works as with the first time).

- close issue240 - document precisely how pytest module importing
  works, discuss the two common test directory layouts, and how it
  interacts with PEP420-namespace packages.

- fix issue246 fix finalizer order to be LIFO on independent fixtures
  depending on a parametrized higher-than-function scoped fixture.
  (was quite some effort so please bear with the complexity of this sentence :)
  Thanks Ralph Schmitt for the precise failure example.

- fix issue244 by implementing special index for parameters to only use
  indices for paramentrized test ids

@@ -99,9 +99,9 @@ holger krekel
  filtering with simple strings that are not valid python expressions.
  Examples: "-k 1.3" matches all tests parametrized with 1.3.
  "-k None" filters all tests that have "None" in their name
  and conversely "-k 'not None'".
  Previously these examples would raise syntax errors.

- fix issue384 by removing the trial support code
  since the unittest compat enhancements allow
  trial to handle it on its own

@@ -109,7 +109,7 @@ holger krekel
- don't hide an ImportError when importing a plugin produces one.
  fixes issue375.

- fix issue275 - allow usefixtures and autouse fixtures
  for running doctest text files.

- fix issue380 by making --resultlog only rely on longrepr instead

@@ -135,20 +135,20 @@ holger krekel
  (it already did neutralize pytest.mark.xfail markers)

- refine pytest / pkg_resources interactions: The AssertionRewritingHook
  PEP302 compliant loader now registers itself with setuptools/pkg_resources
  properly so that the pkg_resources.resource_stream method works properly.
  Fixes issue366. Thanks for the investigations and full PR to Jason R. Coombs.

- pytestconfig fixture is now session-scoped as it is the same object during the
  whole test run. Fixes issue370.

- avoid one surprising case of marker malfunction/confusion::

      @pytest.mark.some(lambda arg: ...)
      def test_function():

  would not work correctly because pytest assumes @pytest.mark.some
  gets a function to be decorated already. We now at least detect if this
  arg is a lambda and thus the example will work. Thanks Alex Gaynor
  for bringing it up.

@@ -159,11 +159,11 @@ holger krekel
  although it's not needed by pytest itself atm. Also
  fix caching. Fixes issue376.

- fix issue221 - handle importing of namespace-package with no
  __init__.py properly.

- refactor internal FixtureRequest handling to avoid monkeypatching.
  One of the positive user-facing effects is that the "request" object
  can now be used in closures.

- fixed version comparison in pytest.importskip(modname, minverstring)

@@ -1,8 +1,8 @@
pytest-2.5.1: fixes and new home page styling
===========================================================================

pytest is a mature Python testing tool with more than a 1000 tests
against itself, passing on many different interpreters and platforms.

The 2.5.1 release maintains the "zero-reported-bugs" promise by fixing
the three bugs reported since the last release a few days ago. It also

@@ -11,12 +11,12 @@ the flask theme from Armin Ronacher:

  http://pytest.org

If you have anything more to improve styling and docs,
we'd be very happy to merge further pull requests.

On the coding side, the release also contains a little enhancement to
fixture decorators allowing to directly influence generation of test
ids, thanks to Floris Bruynooghe. Other thanks for helping with
this release go to Anatoly Bubenkoff and Ronny Pfannschmidt.

As usual, you can upgrade from pypi via::

@@ -37,7 +37,7 @@ holger krekel

- Allow parameterized fixtures to specify the ID of the parameters by
  adding an ids argument to pytest.fixture() and pytest.yield_fixture().
  Thanks Floris Bruynooghe.

- fix issue404 by always using the binary xml escape in the junitxml
  plugin. Thanks Ronny Pfannschmidt.
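
A sketch of the ids argument on parametrized fixtures mentioned above; the
parameter values and ids are purely illustrative::

    import pytest

    @pytest.fixture(params=[10, 100], ids=["small", "large"])
    def size(request):
        return request.param

    def test_size_is_positive(size):
        # runs twice, reported as test_size_is_positive[small] and [large]
        assert size > 0
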
@@ -1,8 +1,8 @@
pytest-2.5.2: fixes
===========================================================================

pytest is a mature Python testing tool with more than a 1000 tests
against itself, passing on many different interpreters and platforms.

The 2.5.2 release fixes a few bugs with two maybe-bugs remaining and
actively being worked on (and waiting for the bug reporter's input).

@@ -19,18 +19,18 @@ As usual, you can upgrade from pypi via::

Thanks to the following people who contributed to this release:

    Anatoly Bubenkov
    Ronny Pfannschmidt
    Floris Bruynooghe
    Bruno Oliveira
    Andreas Pelme
    Jurko Gospodnetić
    Piotr Banaszkiewicz
    Simon Liedtke
    lakka
    Lukasz Balcerzak
    Philippe Muller
    Daniel Hahler

have fun,
holger krekel

@@ -39,11 +39,11 @@ holger krekel
-----------------------------------

- fix issue409 -- better interoperate with cx_freeze by not
  trying to import from collections.abc which causes problems
  for py27/cx_freeze. Thanks Wolfgang L. for reporting and tracking it down.

- fixed docs and code to use "pytest" instead of "py.test" almost everywhere.
  Thanks Jurko Gospodnetic for the complete PR.

- fix issue425: mention at end of "py.test -h" that --markers
  and --fixtures work according to specified test path (or current dir)

@@ -54,7 +54,7 @@ holger krekel

- copy, cleanup and integrate py.io capture
  from pylib 1.4.20.dev2 (rev 13d9af95547e)

- address issue416: clarify docs as to conftest.py loading semantics

- fix issue429: comparing byte strings with non-ascii chars in assert

@@ -53,6 +53,6 @@ The py.test Development Team
  Thanks Gabriel Reis for the PR.

- add more talks to the documentation
- extend documentation on the --ignore cli option
- use pytest-runner for setuptools integration
- minor fixes for interaction with OS X El Capitan system integrity protection (thanks Florian)

@@ -14,25 +14,25 @@ As usual, you can upgrade from pypi via::

Thanks to all who contributed to this release, among them:

    Anatoly Bubenkov
    Bruno Oliveira
    Buck Golemon
    David Vierra
    Florian Bruhin
    Galaczi Endre
    Georgy Dyuldin
    Lukas Bednar
    Luke Murphy
    Marcin Biernat
    Matt Williams
    Michael Aquilina
    Raphael Pierzina
    Ronny Pfannschmidt
    Ryan Wooden
    Tiemo Kieft
    TomV
    holger krekel
    jab


Happy testing,

@@ -76,18 +76,18 @@ The py.test Development Team
**Changes**

* **Important**: `py.code <https://pylib.readthedocs.io/en/latest/code.html>`_ has been
  merged into the ``pytest`` repository as ``pytest._code``. This decision
  was made because ``py.code`` had very few uses outside ``pytest`` and the
  fact that it was in a different repository made it difficult to fix bugs on
  its code in a timely manner. The team hopes with this to be able to better
  refactor out and improve that code.
  This change shouldn't affect users, but it is useful to let users aware
  if they encounter any strange behavior.

  Keep in mind that the code for ``pytest._code`` is **private** and
  **experimental**, so you definitely should not import it explicitly!

  Please note that the original ``py.code`` is still available in
  `pylib <https://pylib.readthedocs.io>`_.

* ``pytest_enter_pdb`` now optionally receives the pytest config object.

@@ -129,8 +129,8 @@ The py.test Development Team

* Fix (`#1422`_): junit record_xml_property doesn't allow multiple records
  with same name.


.. _`traceback style docs`: https://pytest.org/latest/usage.html#modifying-python-traceback-printing

.. _#1422: https://github.com/pytest-dev/pytest/issues/1422

@@ -14,17 +14,17 @@ As usual, you can upgrade from pypi via::

Thanks to all who contributed to this release, among them:

    Bruno Oliveira
    Daniel Hahler
    Dmitry Malinovsky
    Florian Bruhin
    Floris Bruynooghe
    Matt Bachmann
    Ronny Pfannschmidt
    TomV
    Vladimir Bolshakov
    Zearin
    palaviv


Happy testing,

@@ -8,10 +8,10 @@ against itself, passing on many different interpreters and platforms.

This release contains a lot of bugs fixes and improvements, and much of
the work done on it was possible because of the 2016 Sprint[1], which
was funded by an indiegogo campaign which raised over US$12,000 with
nearly 100 backers.

There's a "What's new in pytest 3.0" [2] blog post highlighting the
major features in this release.

To see the complete changelog and documentation, please visit:

@@ -7,7 +7,7 @@ This release fixes some regressions reported in version 3.0.0, being a
drop-in replacement. To upgrade:

  pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

@@ -7,7 +7,7 @@ This release fixes some regressions and bugs reported in version 3.0.1, being a
drop-in replacement. To upgrade::

  pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

@@ -3,11 +3,11 @@ pytest-3.0.3

pytest 3.0.3 has just been released to PyPI.

This release fixes some regressions and bugs reported in the last version,
being a drop-in replacement. To upgrade::

  pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

@@ -3,11 +3,11 @@ pytest-3.0.4

pytest 3.0.4 has just been released to PyPI.

This release fixes some regressions and bugs reported in the last version,
being a drop-in replacement. To upgrade::

  pip install --upgrade pytest

The changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them:

@ -6,7 +6,7 @@ pytest 3.0.5 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.0.6 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
|
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.0.7 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.1.1 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.1.2 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.1.3 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.2.1 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.2.2 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.2.3 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.2.4 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.2.5 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.3.1 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.3.2 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.4.1 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.4.2 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@ -6,7 +6,7 @@ pytest 3.5.1 has just been released to PyPI.
|
||||||
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
This is a bug-fix release, being a drop-in replacement. To upgrade::
|
||||||
|
|
||||||
pip install --upgrade pytest
|
pip install --upgrade pytest
|
||||||
|
|
||||||
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
|
||||||
|
|
||||||
Thanks to all who contributed to this release, among them:
|
Thanks to all who contributed to this release, among them:
|
||||||
|
|
|
@@ -29,17 +29,17 @@ you will see the return value of the function call::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_assert1.py F [100%]

================================= FAILURES =================================
______________________________ test_function _______________________________

def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()

test_assert1.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================

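
The module that produces this report is not shown in the hunk above; a minimal sketch, using only the names visible in the traceback (``f`` returning 3 and ``test_function``), would be::

    # content of test_assert1.py (sketch reconstructed from the output above)
    def f():
        return 3

    def test_function():
        assert f() == 4
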
@@ -172,12 +172,12 @@ if you run this module::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_assert2.py F [100%]

================================= FAILURES =================================
___________________________ test_set_comparison ____________________________

def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
@@ -188,7 +188,7 @@ if you run this module::
E Extra items in the right set:
E '5'
E Use -v to get the full diff

test_assert2.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================

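
For reference, a minimal sketch of the module behind this set-comparison report, using only the values visible in the traceback above::

    # content of test_assert2.py (sketch reconstructed from the output above)
    def test_set_comparison():
        set1 = set("1308")
        set2 = set("8035")
        assert set1 == set2
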
@@ -209,7 +209,7 @@ the ``pytest_assertrepr_compare`` hook.
.. autofunction:: _pytest.hookspec.pytest_assertrepr_compare
:noindex:

As an example consider adding the following hook in a :ref:`conftest.py <conftest.py>`
file which provides an alternative explanation for ``Foo`` objects::

# content of conftest.py
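
The body of that ``conftest.py`` example is elided by the hunk above; a sketch of what such a hook can look like, assuming a ``Foo`` class with a ``val`` attribute as suggested by the failure output further below::

    # content of conftest.py (sketch; Foo and its "val" attribute are assumed)
    from test_foocompare import Foo

    def pytest_assertrepr_compare(op, left, right):
        if isinstance(left, Foo) and isinstance(right, Foo) and op == "==":
            return ["Comparing Foo instances:",
                    "   vals: %s != %s" % (left.val, right.val)]
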
@@ -241,14 +241,14 @@ the conftest file::
F [100%]
================================= FAILURES =================================
_______________________________ test_compare _______________________________

def test_compare():
f1 = Foo(1)
f2 = Foo(2)
> assert f1 == f2
E assert Comparing Foo instances:
E vals: 1 != 2

test_foocompare.py:11: AssertionError
1 failed in 0.12 seconds

@@ -17,13 +17,13 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
$ pytest -q --fixtures
cache
Return a cache object that can persist state between testing sessions.

cache.get(key, default)
cache.set(key, value)

Keys must be a ``/`` separated value, where the first part is usually the
name of your plugin or application to avoid clashes with other cache users.

Values can be any object handled by the json stdlib module.
capsys
Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
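
A small usage sketch of the ``cache`` fixture described above (the key name is made up for illustration; any ``/``-separated key with a JSON-encodable value works)::

    # sketch: persisting a value between test sessions with the builtin cache fixture
    def test_remembers_value(cache):
        previous = cache.get("example/last_value", None)  # None on the very first run
        cache.set("example/last_value", 42)               # stored for the next session
        assert previous in (None, 42)
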
@@ -49,9 +49,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
pytestconfig
Session-scoped fixture that returns the :class:`_pytest.config.Config` object.

Example::

def test_foo(pytestconfig):
if pytestconfig.getoption("verbose"):
...
@@ -61,9 +61,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
configured reporters, like JUnit XML.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.

Example::

def test_function(record_property):
record_property("example_key", 1)
record_xml_property
@@ -74,9 +74,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
automatically xml-encoded
caplog
Access and control log capturing.

Captured logs are available through the following methods::

* caplog.text -> string containing formatted log output
* caplog.records -> list of logging.LogRecord instances
* caplog.record_tuples -> list of (logger_name, level, message) tuples
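
A brief usage sketch of those ``caplog`` accessors (the logger name and message are illustrative only)::

    # sketch: asserting on captured log records with the builtin caplog fixture
    import logging

    def test_logs_a_warning(caplog):
        logging.getLogger("myapp").warning("disk almost full")
        assert "disk almost full" in caplog.text
        assert caplog.record_tuples == [("myapp", logging.WARNING, "disk almost full")]
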
@@ -84,7 +84,7 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
monkeypatch
The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::

monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
@@ -93,14 +93,14 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
monkeypatch.delenv(name, value, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)

All modifications will be undone after the requesting
test function or fixture has finished. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
recwarn
Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.

See http://docs.python.org/library/warnings.html for information
on warning categories.
tmpdir_factory
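
A short sketch of the ``monkeypatch`` helpers in use (the environment variable names are invented for illustration; every change is rolled back once the test finishes)::

    # sketch: temporary modifications that monkeypatch undoes after the test
    import os

    def test_monkeypatching(monkeypatch):
        monkeypatch.setenv("MYAPP_MODE", "testing")
        monkeypatch.setitem(os.environ, "MYAPP_DEBUG", "1")  # dict-style equivalent
        assert os.environ["MYAPP_MODE"] == "testing"
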
@@ -111,9 +111,9 @@ For information about fixtures, see :ref:`fixtures`. To see a complete list of a
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.

.. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html

no tests ran in 0.12 seconds

You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like::
@@ -20,7 +20,7 @@ last ``pytest`` invocation:
For cleanup (usually not needed), a ``--cache-clear`` option allows to remove
all cross-session cache contents ahead of a test run.

Other plugins may access the `config.cache`_ object to set/get
**json encodable** values between ``pytest`` invocations.

.. note::
@@ -49,26 +49,26 @@ If you run this for the first time you will see two failures::
.................F.......F........................ [100%]
================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
_______________________________ test_num[25] _______________________________

i = 25

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
2 failed, 48 passed in 0.12 seconds

@@ -80,31 +80,31 @@ If you then run it with ``--lf``::
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items / 48 deselected
run-last-failure: rerun previous 2 failures

test_50.py FF [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
_______________________________ test_num[25] _______________________________

i = 25

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
================= 2 failed, 48 deselected in 0.12 seconds ==================

@@ -121,31 +121,31 @@ of ``FF`` and dots)::
rootdir: $REGENDOC_TMPDIR, inifile:
collected 50 items
run-last-failure: rerun previous 2 failures first

test_50.py FF................................................ [100%]

================================= FAILURES =================================
_______________________________ test_num[17] _______________________________

i = 17

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
_______________________________ test_num[25] _______________________________

i = 25

@pytest.mark.parametrize("i", range(50))
def test_num(i):
if i in (17, 25):
> pytest.fail("bad luck")
E Failed: bad luck

test_50.py:6: Failed
=================== 2 failed, 48 passed in 0.12 seconds ====================

@@ -198,13 +198,13 @@ of the sleep::
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

def test_function(mydata):
> assert mydata == 23
E assert 42 == 23

test_caching.py:14: AssertionError
1 failed in 0.12 seconds

@@ -215,13 +215,13 @@ the cache and this will be quick::
F [100%]
================================= FAILURES =================================
______________________________ test_function _______________________________

mydata = 42

def test_function(mydata):
> assert mydata == 23
E assert 42 == 23

test_caching.py:14: AssertionError
1 failed in 0.12 seconds

@@ -246,7 +246,7 @@ You can always peek at the content of the cache using the
['test_caching.py::test_function']
example/value contains:
42

======================= no tests ran in 0.12 seconds =======================

Clearing Cache content
@@ -68,16 +68,16 @@ of the failing function and hide the other one::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .F [100%]

================================= FAILURES =================================
________________________________ test_func2 ________________________________

def test_func2():
> assert False
E assert False

test_module.py:9: AssertionError
-------------------------- Captured stdout setup ---------------------------
setting up <function test_func2 at 0xdeadbeef>
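
To assert on such captured output from within a test rather than only see it in the report, the builtin ``capsys`` fixture gives programmatic access; a minimal sketch::

    # sketch: reading captured stdout/stderr inside a test via capsys
    def test_prints_greeting(capsys):
        print("hello")
        out, err = capsys.readouterr()
        assert out == "hello\n"
        assert err == ""
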
@@ -8,9 +8,9 @@ Contact channels
- `pytest issue tracker`_ to report bugs or suggest features (for version
2.0 and above).

- `pytest on stackoverflow.com <http://stackoverflow.com/search?q=pytest>`_
to post questions with the tag ``pytest``. New Questions will usually
be seen by pytest users or developers and answered quickly.

- `Testing In Python`_: a mailing list for Python testing tools and discussion.

@@ -38,7 +38,7 @@ Here's a summary what ``pytest`` uses ``rootdir`` for:
Important to emphasize that ``rootdir`` is **NOT** used to modify ``sys.path``/``PYTHONPATH`` or
influence how modules are imported. See :ref:`pythonpath` for more details.

``--rootdir=path`` command-line option can be used to force a specific directory.
The directory passed may contain environment variables when it is used in conjunction
with ``addopts`` in a ``pytest.ini`` file.

@@ -40,7 +40,7 @@ avoid creating labels just for the sake of creating them.
Each label should include a description in the GitHub's interface stating its purpose.

Temporary labels
~~~~~~~~~~~~~~~~

To classify issues for a special event it is encouraged to create a temporary label. This helps those involved to find
the relevant issues to work on. Examples of that are sprints in Python events or global hacking events.
@@ -65,9 +65,9 @@ then you can just invoke ``pytest`` without command line options::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 1 item

mymodule.py . [100%]

========================= 1 passed in 0.12 seconds =========================

It is possible to use fixtures using the ``getfixture`` helper::
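
For context, the ``mymodule.py .`` line above comes from collecting a docstring doctest; a minimal sketch of such a module (the function name and return value are illustrative)::

    # content of mymodule.py (sketch)
    def something():
        """a doctest in a docstring

        >>> something()
        42
        """
        return 42
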
@@ -35,9 +35,9 @@ You can then restrict a test run to only run tests marked with ``webtest``::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected

test_server.py::test_send_http PASSED [100%]

================== 1 passed, 3 deselected in 0.12 seconds ==================

Or the inverse, running all tests except the webtest ones::
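
The test module driving these runs is not included in the hunks; a sketch consistent with the node IDs shown above (the function bodies are placeholders)::

    # content of test_server.py (sketch inferred from the node IDs above)
    import pytest

    @pytest.mark.webtest
    def test_send_http():
        pass  # perform some webtest test for your app

    def test_something_quick():
        pass

    def test_another():
        pass

    class TestClass(object):
        def test_method(self):
            pass
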
@@ -48,11 +48,11 @@ Or the inverse, running all tests except the webtest ones::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected

test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]

================== 3 passed, 1 deselected in 0.12 seconds ==================

Selecting tests based on their node ID
@@ -68,9 +68,9 @@ tests based on their module, class, method, or function name::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item

test_server.py::TestClass::test_method PASSED [100%]

========================= 1 passed in 0.12 seconds =========================

You can also select on the class::
@@ -81,9 +81,9 @@ You can also select on the class::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item

test_server.py::TestClass::test_method PASSED [100%]

========================= 1 passed in 0.12 seconds =========================

Or select multiple nodes::
@@ -94,10 +94,10 @@ Or select multiple nodes::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items

test_server.py::TestClass::test_method PASSED [ 50%]
test_server.py::test_send_http PASSED [100%]

========================= 2 passed in 0.12 seconds =========================

.. _node-id:
@@ -132,9 +132,9 @@ select tests based on their names::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 3 deselected

test_server.py::test_send_http PASSED [100%]

================== 1 passed, 3 deselected in 0.12 seconds ==================

And you can also run all tests except the ones that match the keyword::
@@ -145,11 +145,11 @@ And you can also run all tests except the ones that match the keyword::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 1 deselected

test_server.py::test_something_quick PASSED [ 33%]
test_server.py::test_another PASSED [ 66%]
test_server.py::TestClass::test_method PASSED [100%]

================== 3 passed, 1 deselected in 0.12 seconds ==================

Or to select "http" and "quick" tests::
@@ -160,10 +160,10 @@ Or to select "http" and "quick" tests::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items / 2 deselected

test_server.py::test_send_http PASSED [ 50%]
test_server.py::test_something_quick PASSED [100%]

================== 2 passed, 2 deselected in 0.12 seconds ==================

.. note::
@@ -199,21 +199,21 @@ You can ask which markers exist for your test suite - the list includes our just

$ pytest --markers
@pytest.mark.webtest: mark a test as a webtest.

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.


For an example on how to add and work with markers from a plugin, see
:ref:`adding a custom marker from a plugin`.
@@ -227,7 +227,7 @@ For an example on how to add and work with markers from a plugin, see
* Asking for existing markers via ``pytest --markers`` gives good output

* Typos in function markers are treated as an error if you use
the ``--strict`` option.

.. _`scoped-marking`:

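
The custom ``env`` marker that appears in the listings below is wired up in a ``conftest.py``; a sketch of the kind of hook code involved, assuming a ``-E`` command line option and the pytest 3.x ``item.get_marker`` API::

    # sketch: skip tests marked @pytest.mark.env(NAME) unless "-E NAME" was given
    import pytest

    def pytest_addoption(parser):
        parser.addoption("-E", action="store", metavar="NAME",
                         help="only run tests matching the environment NAME.")

    def pytest_configure(config):
        # register an additional marker so it shows up in "pytest --markers"
        config.addinivalue_line(
            "markers", "env(name): mark test to run only on named environment")

    def pytest_runtest_setup(item):
        envmarker = item.get_marker("env")
        if envmarker is not None:
            envname = envmarker.args[0]
            if envname != item.config.getoption("-E"):
                pytest.skip("test requires env %r" % envname)
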
@@ -352,9 +352,9 @@ the test needs::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_someenv.py s [100%]

======================== 1 skipped in 0.12 seconds =========================

and here is one that specifies exactly the environment needed::
@@ -364,30 +364,30 @@ and here is one that specifies exactly the environment needed::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_someenv.py . [100%]

========================= 1 passed in 0.12 seconds =========================

The ``--markers`` option always gives you a list of available markers::

$ pytest --markers
@pytest.mark.env(name): mark test to run only on named environment

@pytest.mark.skip(reason=None): skip the given test function with an optional reason. Example: skip(reason="no way of currently testing this") skips the test.

@pytest.mark.skipif(condition): skip the given test function if eval(condition) results in a True value. Evaluation happens within the module global context. Example: skipif('sys.platform == "win32"') skips the test if we are on the win32 platform. see http://pytest.org/latest/skipping.html

@pytest.mark.xfail(condition, reason=None, run=True, raises=None, strict=False): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. If only specific exception(s) are expected, you can list them in raises, and if the test fails in other ways, it will be reported as a true failure. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2.see http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@pytest.mark.tryfirst: mark a hook implementation function such that the plugin machinery will try to call it first/as early as possible.

@pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.


.. _`passing callables to custom markers`:

@@ -523,11 +523,11 @@ then you will see two tests skipped and two executed tests as expected::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_plat.py s.s. [100%]
========================= short test summary info ==========================
SKIP [2] $REGENDOC_TMPDIR/conftest.py:12: cannot run on platform linux

=================== 2 passed, 2 skipped in 0.12 seconds ====================

Note that if you specify a platform via the marker-command line option like this::
@@ -537,9 +537,9 @@ Note that if you specify a platform via the marker-command line option like this
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 3 deselected

test_plat.py . [100%]

================== 1 passed, 3 deselected in 0.12 seconds ==================

then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
@@ -588,9 +588,9 @@ We can now use the ``-m option`` to select one set::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 2 deselected

test_module.py FF [100%]

================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
@@ -609,9 +609,9 @@ or to select both "event" and "interface" tests::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items / 1 deselected

test_module.py FFF [100%]

================================= FAILURES =================================
__________________________ test_interface_simple ___________________________
test_module.py:3: in test_interface_simple
@@ -30,9 +30,9 @@ now execute the test specification::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collected 2 items

test_simple.yml F. [100%]

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
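
The ``conftest.py`` that teaches pytest to collect ``test_simple.yml`` is elided from these hunks; a condensed sketch of the pytest 3.x-style collection hooks it relies on (assumes PyYAML is available; class names and the report string follow the output above)::

    # content of conftest.py (condensed sketch)
    import pytest
    import yaml

    def pytest_collect_file(parent, path):
        if path.ext == ".yml" and path.basename.startswith("test"):
            return YamlFile(path, parent)

    class YamlFile(pytest.File):
        def collect(self):
            spec = yaml.safe_load(self.fspath.open())
            for name, value in sorted(spec.items()):
                yield YamlItem(name, self, value)

    class YamlItem(pytest.Item):
        def __init__(self, name, parent, spec):
            super(YamlItem, self).__init__(name, parent)
            self.spec = spec

        def runtest(self):
            for key, value in sorted(self.spec.items()):
                if key != value:
                    raise YamlException(self, key, value)

        def repr_failure(self, excinfo):
            """called when self.runtest() raises an exception"""
            if isinstance(excinfo.value, YamlException):
                return "usecase execution failed"
            return super(YamlItem, self).repr_failure(excinfo)

        def reportinfo(self):
            return self.fspath, 0, "usecase: %s" % self.name

    class YamlException(Exception):
        """custom exception for error reporting"""
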
@@ -63,10 +63,10 @@ consulted when reporting in ``verbose`` mode::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items

test_simple.yml::hello FAILED [ 50%]
test_simple.yml::ok PASSED [100%]

================================= FAILURES =================================
______________________________ usecase: hello ______________________________
usecase execution failed
@@ -87,5 +87,5 @@ interesting to just look at the collection tree::
<YamlFile 'test_simple.yml'>
<YamlItem 'hello'>
<YamlItem 'ok'>

======================= no tests ran in 0.12 seconds =======================
@ -55,13 +55,13 @@ let's run the full monty::
....F [100%]
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________

param1 = 4

    def test_compute(param1):
>       assert param1 < 4
E       assert 4 < 4

test_compute.py:3: AssertionError
1 failed, 4 passed in 0.12 seconds

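The hunk above only shows the failing parametrized call; the test module and
the command line hook it relies on sit outside the hunk. A sketch of the usual
shape of this example (the ``--all`` option name and the ``range(5)`` upper
bound are assumptions, not visible in the diff)::

    # conftest.py
    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true",
                         help="run all parameter combinations")

    def pytest_generate_tests(metafunc):
        if "param1" in metafunc.fixturenames:
            # 0 and 1 by default, 0..4 with --all (hence test_compute[4] fails)
            end = 5 if metafunc.config.getoption("all") else 2
            metafunc.parametrize("param1", range(end))

    # test_compute.py
    def test_compute(param1):
        assert param1 < 4
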
@ -151,7 +151,7 @@ objects, they are still using the default pytest representation::
  <Function 'test_timedistance_v2[20011211-20011212-expected1]'>
  <Function 'test_timedistance_v3[forward]'>
  <Function 'test_timedistance_v3[backward]'>

======================= no tests ran in 0.12 seconds =======================

In ``test_timedistance_v3``, we used ``pytest.param`` to specify the test IDs
@ -198,9 +198,9 @@ this is a fully self-contained example which you can run with::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_scenarios.py .... [100%]

========================= 4 passed in 0.12 seconds =========================

If you just collect tests you'll also nicely see 'advanced' and 'basic' as variants for the test function::
@ -218,7 +218,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
  <Function 'test_demo2[basic]'>
  <Function 'test_demo1[advanced]'>
  <Function 'test_demo2[advanced]'>

======================= no tests ran in 0.12 seconds =======================

Note that we told ``metafunc.parametrize()`` that your scenario values
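
The truncated sentence above refers to the class-scoped scenario
parametrization that produces the ``test_demo1[basic]``/``[advanced]`` ids in
the collection tree. A sketch of that hook; the class and attribute names are
reconstructed from memory of this example rather than taken from the diff::

    # test_scenarios.py -- sketch
    def pytest_generate_tests(metafunc):
        idlist = []
        argvalues = []
        for scenario in metafunc.cls.scenarios:
            idlist.append(scenario[0])
            items = scenario[1].items()
            argnames = [x[0] for x in items]
            argvalues.append([x[1] for x in items])
        metafunc.parametrize(argnames, argvalues, ids=idlist, scope="class")

    scenario1 = ("basic", {"attribute": "value"})
    scenario2 = ("advanced", {"attribute": "value2"})

    class TestSampleWithScenarios(object):
        scenarios = [scenario1, scenario2]

        def test_demo1(self, attribute):
            assert isinstance(attribute, str)

        def test_demo2(self, attribute):
            assert isinstance(attribute, str)
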
@ -279,7 +279,7 @@ Let's first see how it looks like at collection time::
<Module 'test_backends.py'>
  <Function 'test_db_initialized[d1]'>
  <Function 'test_db_initialized[d2]'>

======================= no tests ran in 0.12 seconds =======================

And then when we run the test::
@ -288,15 +288,15 @@ And then when we run the test::
.F [100%]
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________

db = <conftest.DB2 object at 0xdeadbeef>

    def test_db_initialized(db):
        # a dummy test
        if db.__class__.__name__ == "DB2":
>           pytest.fail("deliberately failing for demo purposes")
E           Failed: deliberately failing for demo purposes

test_backends.py:6: Failed
1 failed, 1 passed in 0.12 seconds

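The ``db`` fixture and the ``DB1``/``DB2`` backends referenced in the failure
above live in a ``conftest.py`` outside this hunk; a minimal sketch of the
deferred, indirect parametrization it uses, assuming the shape of the published
example::

    # conftest.py -- sketch
    import pytest

    class DB1(object):
        "one database backend"

    class DB2(object):
        "an alternative database backend"

    @pytest.fixture
    def db(request):
        if request.param == "d1":
            return DB1()
        elif request.param == "d2":
            return DB2()
        raise ValueError("invalid internal test config")

    def pytest_generate_tests(metafunc):
        if "db" in metafunc.fixturenames:
            metafunc.parametrize("db", ["d1", "d2"], indirect=True)
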
@ -339,7 +339,7 @@ The result of this test will be successful::
collected 1 item
<Module 'test_indirect_list.py'>
  <Function 'test_indirect[a-b]'>

======================= no tests ran in 0.12 seconds =======================

.. regendoc:wipe
@ -384,13 +384,13 @@ argument sets to use for each test function. Let's run it::
F.. [100%]
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________

self = <test_parametrize.TestClass object at 0xdeadbeef>, a = 1, b = 2

    def test_equals(self, a, b):
>       assert a == b
E       assert 1 == 2

test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.12 seconds

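The ``TestClass.test_equals[1-2]`` failure above comes from a per-class table
of argument sets expanded by ``pytest_generate_tests``; a sketch of that
pattern (the second method and its argument values are assumptions consistent
with the "1 failed, 2 passed" result)::

    # test_parametrize.py -- sketch
    import pytest

    def pytest_generate_tests(metafunc):
        # called once per test function; look up its argument sets on the class
        funcarglist = metafunc.cls.params[metafunc.function.__name__]
        argnames = sorted(funcarglist[0])
        metafunc.parametrize(
            argnames,
            [[funcargs[name] for name in argnames] for funcargs in funcarglist],
        )

    class TestClass(object):
        params = {
            "test_equals": [dict(a=1, b=2), dict(a=3, b=3)],
            "test_zerodivision": [dict(a=1, b=0)],
        }

        def test_equals(self, a, b):
            assert a == b

        def test_zerodivision(self, a, b):
            with pytest.raises(ZeroDivisionError):
                a / b
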
@ -462,11 +462,11 @@ If you run this with reporting for skips enabled::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] $REGENDOC_TMPDIR/conftest.py:11: could not import 'opt2'

=================== 1 passed, 1 skipped in 0.12 seconds ====================

You'll see that we don't have an ``opt2`` module and thus the second test run
@ -504,10 +504,10 @@ For example::
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected

In this example, we have 4 parametrized tests. Except for the first test,
we mark the remaining three parametrized tests with the custom marker ``basic``,
and for the fourth test we also use the built-in mark ``xfail`` to indicate this
test is expected to fail. For explicitness, we set test ids for some tests.

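The decorator belonging to ``test_eval`` is cut off by the hunk; a sketch of
what the full parametrization described in the paragraph above typically looks
like (the concrete expressions and ids are assumptions)::

    import pytest

    @pytest.mark.parametrize(
        "test_input,expected",
        [
            ("3+5", 8),
            pytest.param("1+7", 8, marks=pytest.mark.basic),
            pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4"),
            pytest.param(
                "6*9", 42, marks=[pytest.mark.basic, pytest.mark.xfail], id="basic_6*9"
            ),
        ],
    )
    def test_eval(test_input, expected):
        assert eval(test_input) == expected
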
Then run ``pytest`` with verbose mode and with only the ``basic`` marker::
@ -133,7 +133,7 @@ then the test collection looks like this::
<Instance '()'>
  <Function 'simple_check'>
  <Function 'complex_check'>

======================= no tests ran in 0.12 seconds =======================

.. note::
@ -180,7 +180,7 @@ You can always peek at the collection tree without running tests like this::
<Instance '()'>
  <Function 'test_method'>
  <Function 'test_anothermethod'>

======================= no tests ran in 0.12 seconds =======================

.. _customizing-test-collection:
@ -243,5 +243,5 @@ file will be left out::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: pytest.ini
collected 0 items

======================= no tests ran in 0.12 seconds =======================
@ -14,82 +14,82 @@ get on the terminal - we are working on that)::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/assertion, inifile:
collected 42 items

failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF [100%]

================================= FAILURES =================================
____________________________ test_generative[0] ____________________________

param1 = 3, param2 = 6

    def test_generative(param1, param2):
>       assert param1 * 2 < param2
E       assert (3 * 2) < 6

failure_demo.py:16: AssertionError
_________________________ TestFailing.test_simple __________________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_simple(self):
        def f():
            return 42
        def g():
            return 43

>       assert f() == g()
E       assert 42 == 43
E        +  where 42 = <function TestFailing.test_simple.<locals>.f at 0xdeadbeef>()
E        +  and   43 = <function TestFailing.test_simple.<locals>.g at 0xdeadbeef>()

failure_demo.py:29: AssertionError
____________________ TestFailing.test_simple_multiline _____________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_simple_multiline(self):
        otherfunc_multi(
            42,
>           6*9)

failure_demo.py:34:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 42, b = 54

    def otherfunc_multi(a,b):
>       assert (a ==
        b)
E       assert 42 == 54

failure_demo.py:12: AssertionError
___________________________ TestFailing.test_not ___________________________

self = <failure_demo.TestFailing object at 0xdeadbeef>

    def test_not(self):
        def f():
            return 42
>       assert not f()
E       assert not 42
E        +  where 42 = <function TestFailing.test_not.<locals>.f at 0xdeadbeef>()

failure_demo.py:39: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_text(self):
>       assert 'spam' == 'eggs'
E       AssertionError: assert 'spam' == 'eggs'
E         - spam
E         + eggs

failure_demo.py:43: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_similar_text(self):
>       assert 'foo 1 bar' == 'foo 2 bar'
E       AssertionError: assert 'foo 1 bar' == 'foo 2 bar'
@ -97,12 +97,12 @@ get on the terminal - we are working on that)::
E         ?     ^
E         + foo 2 bar
E         ?     ^

failure_demo.py:46: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_multiline_text(self):
>       assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
E       AssertionError: assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@ -110,12 +110,12 @@ get on the terminal - we are working on that)::
E         - spam
E         + eggs
E           bar

failure_demo.py:49: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_long_text(self):
        a = '1'*100 + 'a' + '2'*100
        b = '1'*100 + 'b' + '2'*100
@ -127,12 +127,12 @@ get on the terminal - we are working on that)::
E         ?          ^
E         + 1111111111b222222222
E         ?          ^

failure_demo.py:54: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_long_text_multiline(self):
        a = '1\n'*100 + 'a' + '2\n'*100
        b = '1\n'*100 + 'b' + '2\n'*100
@ -145,25 +145,25 @@ get on the terminal - we are working on that)::
E         1
E         1
E         1...
E
E         ...Full output truncated (7 lines hidden), use '-vv' to show

failure_demo.py:59: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_list(self):
>       assert [0, 1, 2] == [0, 1, 3]
E       assert [0, 1, 2] == [0, 1, 3]
E         At index 2 diff: 2 != 3
E         Use -v to get the full diff

failure_demo.py:62: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_list_long(self):
        a = [0]*100 + [1] + [3]*100
        b = [0]*100 + [2] + [3]*100
@ -171,12 +171,12 @@ get on the terminal - we are working on that)::
E       assert [0, 0, 0, 0, 0, 0, ...] == [0, 0, 0, 0, 0, 0, ...]
E         At index 100 diff: 1 != 2
E         Use -v to get the full diff

failure_demo.py:67: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_dict(self):
>       assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
E       AssertionError: assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@ -187,14 +187,14 @@ get on the terminal - we are working on that)::
E         {'c': 0}
E         Right contains more items:
E         {'d': 0}...
E
E         ...Full output truncated (2 lines hidden), use '-vv' to show

failure_demo.py:70: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_set(self):
>       assert set([0, 10, 11, 12]) == set([0, 20, 21])
E       AssertionError: assert {0, 10, 11, 12} == {0, 20, 21}
@ -205,34 +205,34 @@ get on the terminal - we are working on that)::
E         Extra items in the right set:
E         20
E         21...
E
E         ...Full output truncated (2 lines hidden), use '-vv' to show

failure_demo.py:73: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_eq_longer_list(self):
>       assert [1,2] == [1,2,3]
E       assert [1, 2] == [1, 2, 3]
E         Right contains more items, first extra item: 3
E         Use -v to get the full diff

failure_demo.py:76: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_in_list(self):
>       assert 1 in [0, 2, 3, 4, 5]
E       assert 1 in [0, 2, 3, 4, 5]

failure_demo.py:79: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_multiline(self):
        text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
>       assert 'foo' not in text
@ -244,14 +244,14 @@ get on the terminal - we are working on that)::
E         includes foo
E       ?          +++
E         and a...
E
E         ...Full output truncated (2 lines hidden), use '-vv' to show

failure_demo.py:83: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single(self):
        text = 'single foo line'
>       assert 'foo' not in text
@ -259,36 +259,36 @@ get on the terminal - we are working on that)::
E       'foo' is contained here:
E         single foo line
E       ?        +++

failure_demo.py:87: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single_long(self):
        text = 'head ' * 50 + 'foo ' + 'tail ' * 20
>       assert 'foo' not in text
E       AssertionError: assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E       'foo' is contained here:
E         head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E       ?           +++

failure_demo.py:91: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

self = <failure_demo.TestSpecialisedExplanations object at 0xdeadbeef>

    def test_not_in_text_single_long_term(self):
        text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
>       assert 'f'*70 not in text
E       AssertionError: assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E       'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E         head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E       ?           ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

failure_demo.py:95: AssertionError
______________________________ test_attribute ______________________________

    def test_attribute():
        class Foo(object):
            b = 1
@ -296,10 +296,10 @@ get on the terminal - we are working on that)::
>       assert i.b == 2
E       assert 1 == 2
E        +  where 1 = <failure_demo.test_attribute.<locals>.Foo object at 0xdeadbeef>.b

failure_demo.py:102: AssertionError
_________________________ test_attribute_instance __________________________

    def test_attribute_instance():
        class Foo(object):
            b = 1
@ -307,10 +307,10 @@ get on the terminal - we are working on that)::
E       AssertionError: assert 1 == 2
E        +  where 1 = <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef>.b
E        +  where <failure_demo.test_attribute_instance.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_instance.<locals>.Foo'>()

failure_demo.py:108: AssertionError
__________________________ test_attribute_failure __________________________

    def test_attribute_failure():
        class Foo(object):
            def _get_b(self):
@ -318,19 +318,19 @@ get on the terminal - we are working on that)::
        b = property(_get_b)
        i = Foo()
>       assert i.b == 2

failure_demo.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <failure_demo.test_attribute_failure.<locals>.Foo object at 0xdeadbeef>

    def _get_b(self):
>       raise Exception('Failed to get attrib')
E       Exception: Failed to get attrib

failure_demo.py:114: Exception
_________________________ test_attribute_multiple __________________________

    def test_attribute_multiple():
        class Foo(object):
            b = 1
@ -342,74 +342,74 @@ get on the terminal - we are working on that)::
E        +  where <failure_demo.test_attribute_multiple.<locals>.Foo object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Foo'>()
E        +  and   2 = <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef>.b
E        +  where <failure_demo.test_attribute_multiple.<locals>.Bar object at 0xdeadbeef> = <class 'failure_demo.test_attribute_multiple.<locals>.Bar'>()

failure_demo.py:125: AssertionError
__________________________ TestRaises.test_raises __________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raises(self):
        s = 'qwe'
>       raises(TypeError, "int(s)")

failure_demo.py:134:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>   int(s)
E   ValueError: invalid literal for int() with base 10: 'qwe'

<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:615>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raises_doesnt(self):
>       raises(IOError, "int('3')")
E       Failed: DID NOT RAISE <class 'OSError'>

failure_demo.py:137: Failed
__________________________ TestRaises.test_raise ___________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_raise(self):
>       raise ValueError("demo error")
E       ValueError: demo error

failure_demo.py:140: ValueError
________________________ TestRaises.test_tupleerror ________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_tupleerror(self):
>       a,b = [1]
E       ValueError: not enough values to unpack (expected 2, got 1)

failure_demo.py:143: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
        l = [1,2,3]
        print ("l is %r" % l)
>       a,b = l.pop()
E       TypeError: 'int' object is not iterable

failure_demo.py:148: TypeError
--------------------------- Captured stdout call ---------------------------
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________

self = <failure_demo.TestRaises object at 0xdeadbeef>

    def test_some_error(self):
>       if namenotexi:
E       NameError: name 'namenotexi' is not defined

failure_demo.py:151: NameError
____________________ test_dynamic_compile_shows_nicely _____________________

    def test_dynamic_compile_shows_nicely():
        import imp
        import sys
@ -420,63 +420,63 @@ get on the terminal - we are working on that)::
        py.builtin.exec_(code, module.__dict__)
        sys.modules[name] = module
>       module.foo()

failure_demo.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

    def foo():
>       assert 1 == 0
E       AssertionError

<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:165>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_complex_error(self):
        def f():
            return 44
        def g():
            return 43
>       somefunc(f(), g())

failure_demo.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:9: in somefunc
    otherfunc(x,y)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = 44, b = 43

    def otherfunc(a,b):
>       assert a==b
E       assert 44 == 43

failure_demo.py:6: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_z1_unpack_error(self):
        l = []
>       a,b = l
E       ValueError: not enough values to unpack (expected 2, got 0)

failure_demo.py:182: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_z2_type_error(self):
        l = 3
>       a,b = l
E       TypeError: 'int' object is not iterable

failure_demo.py:186: TypeError
______________________ TestMoreErrors.test_startswith ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_startswith(self):
        s = "123"
        g = "456"
@ -484,12 +484,12 @@ get on the terminal - we are working on that)::
E       AssertionError: assert False
E        +  where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E        +  where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith

failure_demo.py:191: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_startswith_nested(self):
        def f():
            return "123"
@ -501,55 +501,55 @@ get on the terminal - we are working on that)::
E        +  where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
E        +  where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E        +  and   '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()

failure_demo.py:198: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_global_func(self):
>       assert isinstance(globf(42), float)
E       assert False
E        +  where False = isinstance(43, float)
E        +  where 43 = globf(42)

failure_demo.py:201: AssertionError
_______________________ TestMoreErrors.test_instance _______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_instance(self):
        self.x = 6*7
>       assert self.x != 42
E       assert 42 != 42
E        +  where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x

failure_demo.py:205: AssertionError
_______________________ TestMoreErrors.test_compare ________________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_compare(self):
>       assert globf(10) < 5
E       assert 11 < 5
E        +  where 11 = globf(10)

failure_demo.py:208: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________

self = <failure_demo.TestMoreErrors object at 0xdeadbeef>

    def test_try_finally(self):
        x = 1
        try:
>           assert x == 0
E           assert 1 == 0

failure_demo.py:213: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_single_line(self):
        class A(object):
            a = 1
@ -558,12 +558,12 @@ get on the terminal - we are working on that)::
E       AssertionError: A.a appears not to be b
E       assert 1 == 2
E        +  where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a

failure_demo.py:224: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_multiline(self):
        class A(object):
            a = 1
@ -575,12 +575,12 @@ get on the terminal - we are working on that)::
E       one of those
E       assert 1 == 2
E        +  where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a

failure_demo.py:230: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________

self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>

    def test_custom_repr(self):
        class JSON(object):
            a = 1
@ -595,12 +595,12 @@ get on the terminal - we are working on that)::
E         }
E       assert 1 == 2
E        +  where 1 = This is JSON\n{\n  'foo': 'bar'\n}.a

failure_demo.py:240: AssertionError
============================= warnings summary =============================
None
  Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.
  Please use Metafunc.parametrize instead.

-- Docs: http://doc.pytest.org/en/latest/warnings.html
================== 42 failed, 1 warnings in 0.12 seconds ===================
@ -46,9 +46,9 @@ Let's run this without supplying our new option::
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type1'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
@ -56,7 +56,7 @@ Let's run this without supplying our new option::
            print ("second")
>       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
first
@ -68,9 +68,9 @@ And now with supplying a command line option::
F [100%]
================================= FAILURES =================================
_______________________________ test_answer ________________________________

cmdopt = 'type2'

    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
@ -78,7 +78,7 @@ And now with supplying a command line option::
            print ("second")
>       assert 0 # to see what was printed
E       assert 0

test_sample.py:6: AssertionError
--------------------------- Captured stdout call ---------------------------
second
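
Both runs above exercise a ``cmdopt`` fixture defined in a ``conftest.py``
outside these hunks; a minimal sketch consistent with the output (the option
name and default are assumptions)::

    # conftest.py -- sketch
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
                         help="my option: type1 or type2")

    @pytest.fixture
    def cmdopt(request):
        return request.config.getoption("--cmdopt")

The second run would then presumably be invoked with something like
``pytest -q --cmdopt=type2``.
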
@ -118,7 +118,7 @@ directory with the above conftest.py::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======================= no tests ran in 0.12 seconds =======================

.. _`excontrolskip`:
@ -172,11 +172,11 @@ and when running it will see a skipped "slow" test::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .s [100%]
========================= short test summary info ==========================
SKIP [1] test_module.py:8: need --runslow option to run

=================== 1 passed, 1 skipped in 0.12 seconds ====================

Or run it including the ``slow`` marked test::
@ -186,9 +186,9 @@ Or run it including the ``slow`` marked test::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py .. [100%]

========================= 2 passed in 0.12 seconds =========================

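The skip reason ``need --runslow option to run`` in the first run points at a
``conftest.py`` hook pair like the following sketch (the standard pattern for
an opt-in marker; the exact file is not part of this diff)::

    # conftest.py -- sketch
    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         default=False, help="run slow tests")

    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            return  # --runslow given: do not skip slow tests
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)
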
Writing well integrated assertion helpers
@ -223,11 +223,11 @@ Let's run our little function::
F [100%]
================================= FAILURES =================================
______________________________ test_something ______________________________

    def test_something():
>       checkconfig(42)
E       Failed: not configured: 42

test_checkconfig.py:8: Failed
1 failed in 0.12 seconds

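The ``Failed: not configured: 42`` entry above is produced by a small
assertion helper; a sketch of such a helper, hiding its own frame from the
traceback (the ``hasattr`` check is an assumption)::

    # test_checkconfig.py -- sketch
    import pytest

    def checkconfig(x):
        __tracebackhide__ = True  # keep this helper out of the failure traceback
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % x)

    def test_something():
        checkconfig(42)
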
@ -312,7 +312,7 @@ which will add the string to the test header accordingly::
project deps: mylib-1.1
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======================= no tests ran in 0.12 seconds =======================

.. regendoc:wipe
@ -339,7 +339,7 @@ which will add info only when run with "--v"::
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 0 items

======================= no tests ran in 0.12 seconds =======================

and nothing when run plainly::
@ -349,7 +349,7 @@ and nothing when run plainly::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 0 items

======================= no tests ran in 0.12 seconds =======================

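Both header additions above (``project deps: mylib-1.1`` unconditionally,
``did you?`` only in verbose mode) come from ``pytest_report_header`` hooks;
minimal sketches of the two conftest variants, which are not shown in the
diff::

    # conftest.py (variant 1) -- always add a line to the session header
    def pytest_report_header(config):
        return "project deps: mylib-1.1"

    # conftest.py (variant 2) -- only add information when -v/--verbose is given
    def pytest_report_header(config):
        if config.getoption("verbose") > 0:
            return ["did you?"]
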
profiling test duration
@ -383,9 +383,9 @@ Now we can profile which test functions execute the slowest::
|
||||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
||||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||||
collected 3 items
|
collected 3 items
|
||||||
|
|
||||||
test_some_are_slow.py ... [100%]
|
test_some_are_slow.py ... [100%]
|
||||||
|
|
||||||
========================= slowest 3 test durations =========================
|
========================= slowest 3 test durations =========================
|
||||||
0.30s call test_some_are_slow.py::test_funcslow2
|
0.30s call test_some_are_slow.py::test_funcslow2
|
||||||
0.20s call test_some_are_slow.py::test_funcslow1
|
0.20s call test_some_are_slow.py::test_funcslow1
|
||||||
|
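
A test module that would produce a duration profile like the one above when run with ``pytest --durations=3`` (the sleep times are illustrative)::

    # content of test_some_are_slow.py -- sketch
    import time

    def test_funcfast():
        time.sleep(0.1)

    def test_funcslow1():
        time.sleep(0.2)

    def test_funcslow2():
        time.sleep(0.3)
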
@@ -449,18 +449,18 @@ If we run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 4 items

test_step.py .Fx. [100%]

================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

def test_modification(self):
> assert 0
E assert 0

test_step.py:9: AssertionError
========================= short test summary info ==========================
XFAIL test_step.py::TestUserHandling::()::test_deletion

@@ -528,12 +528,12 @@ We can run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 7 items

test_step.py .Fx. [ 57%]
a/test_db.py F [ 71%]
a/test_db2.py F [ 85%]
b/test_error.py E [100%]

================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file $REGENDOC_TMPDIR/b/test_error.py, line 1

@@ -541,37 +541,37 @@ We can run this::
E fixture 'db' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.

$REGENDOC_TMPDIR/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling object at 0xdeadbeef>

def test_modification(self):
> assert 0
E assert 0

test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________

db = <conftest.DB object at 0xdeadbeef>

def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef>
E assert 0

a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________

db = <conftest.DB object at 0xdeadbeef>

def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB object at 0xdeadbeef>
E assert 0

a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.12 seconds ==========
@@ -636,25 +636,25 @@ and run them::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py FF [100%]

================================= FAILURES =================================
________________________________ test_fail1 ________________________________

tmpdir = local('PYTEST_TMPDIR/test_fail10')

def test_fail1(tmpdir):
> assert 0
E assert 0

test_module.py:2: AssertionError
________________________________ test_fail2 ________________________________

def test_fail2():
> assert 0
E assert 0

test_module.py:4: AssertionError
========================= 2 failed in 0.12 seconds =========================

@@ -730,36 +730,36 @@ and run it::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F

================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________

@pytest.fixture
def other():
> assert 0
E assert 0

test_module.py:6: AssertionError
================================= FAILURES =================================
_____________________________ test_call_fails ______________________________

something = None

def test_call_fails(something):
> assert 0
E assert 0

test_module.py:12: AssertionError
________________________________ test_fail2 ________________________________

def test_fail2():
> assert 0
E assert 0

test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.12 seconds =====================

@@ -809,7 +809,7 @@ In that order.
can be changed between releases (even bug fixes) so it shouldn't be relied on for scripting
or automation.

Freezing pytest
---------------

If you freeze your application using a tool like
@@ -821,18 +821,18 @@ while also allowing you to send test files to users so they can run them in their
machines, which can be useful to obtain more information about a hard to reproduce bug.

Fortunately recent ``PyInstaller`` releases already have a custom hook
for pytest, but if you are using another tool to freeze executables
such as ``cx_freeze`` or ``py2exe``, you can use ``pytest.freeze_includes()``
to obtain the full list of internal pytest modules. How to configure the tools
to find the internal modules varies from tool to tool, however.

Instead of freezing the pytest runner as a separate executable, you can make
your frozen program work as the pytest runner by some clever
argument handling during program startup. This allows you to
have a single executable, which is usually more convenient.
Please note that the mechanism for plugin discovery used by pytest
(setupttools entry points) doesn't work with frozen executables so pytest
can't find any third party plugins automatically. To include third party plugins
like ``pytest-timeout`` they must be imported explicitly and passed on to pytest.main.

.. code-block:: python
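
The body of that code block is not included in this hunk. A sketch of the kind of startup handling the paragraph describes, assuming a hypothetical ``--pytest`` switch to dispatch between the application and the test runner::

    # illustrative sketch only -- the "--pytest" switch and structure are assumptions
    import sys

    import pytest
    import pytest_timeout  # third-party plugin, imported explicitly

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "--pytest":
            # run the frozen executable as a pytest runner
            sys.exit(pytest.main(sys.argv[2:], plugins=[pytest_timeout]))
        else:
            pass  # normal application startup would go here
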
@@ -73,20 +73,20 @@ marked ``smtp`` fixture function. Running the test looks like this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_smtpsimple.py F [100%]

================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
> assert 0 # for demo purposes
E assert 0

test_smtpsimple.py:11: AssertionError
========================= 1 failed in 0.12 seconds =========================
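
A test module and fixture that would produce a failure like the one above, sketched from the traceback (connection details are illustrative)::

    # content of test_smtpsimple.py -- sketch
    import smtplib

    import pytest

    @pytest.fixture
    def smtp():
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)

    def test_ehlo(smtp):
        response, msg = smtp.ehlo()
        assert response == 250
        assert 0  # for demo purposes
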
@@ -152,9 +152,9 @@ to do this is by loading these data in a fixture for use by your tests.
This makes use of the automatic caching mechanisms of pytest.

Another good approach is by adding the data files in the ``tests`` folder.
There are also community plugins available to help managing this aspect of
testing, e.g. `pytest-datadir <https://github.com/gabrielcnr/pytest-datadir>`__
- and `pytest-datafiles <https://pypi.org/project/pytest-datafiles/>`__.
+ and `pytest-datafiles <https://pypi.python.org/pypi/pytest-datafiles>`__.

.. _smtpshared:

@@ -172,7 +172,7 @@ per test *module* (the default is to invoke once per test *function*).
Multiple test functions in a test module will thus
each receive the same ``smtp`` fixture instance, thus saving time.

The next example puts the fixture function into a separate ``conftest.py`` file
so that tests from multiple test modules in the directory can
access the fixture function::

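
The ``conftest.py`` the text refers to looks roughly like this (a sketch of the documented module-scoped fixture)::

    # content of conftest.py -- sketch
    import smtplib

    import pytest

    @pytest.fixture(scope="module")
    def smtp():
        return smtplib.SMTP("smtp.gmail.com", 587, timeout=5)
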
@@ -209,32 +209,32 @@ inspect what is going on and can now run the tests::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 2 items

test_module.py FF [100%]

================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
assert b"smtp.gmail.com" in msg
> assert 0 # for demo purposes
E assert 0

test_module.py:6: AssertionError
________________________________ test_noop _________________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

def test_noop(smtp):
response, msg = smtp.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0

test_module.py:11: AssertionError
========================= 2 failed in 0.12 seconds =========================

@@ -331,7 +331,7 @@ Let's execute it::

$ pytest -s -q --tb=no
FFteardown smtp

2 failed in 0.12 seconds

We see that the ``smtp`` instance is finalized after the two

@@ -436,7 +436,7 @@ again, nothing much has changed::

$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)

2 failed in 0.12 seconds

Let's quickly create another test module that actually sets the

@@ -504,51 +504,51 @@ So let's just do another run::
FFFF [100%]
================================= FAILURES =================================
________________________ test_ehlo[smtp.gmail.com] _________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
assert b"smtp.gmail.com" in msg
> assert 0 # for demo purposes
E assert 0

test_module.py:6: AssertionError
________________________ test_noop[smtp.gmail.com] _________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

def test_noop(smtp):
response, msg = smtp.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0

test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

def test_ehlo(smtp):
response, msg = smtp.ehlo()
assert response == 250
> assert b"smtp.gmail.com" in msg
E AssertionError: assert b'smtp.gmail.com' in b'mail.python.org\nPIPELINING\nSIZE 51200000\nETRN\nSTARTTLS\nAUTH DIGEST-MD5 NTLM CRAM-MD5\nENHANCEDSTATUSCODES\n8BITMIME\nDSN\nSMTPUTF8'

test_module.py:5: AssertionError
-------------------------- Captured stdout setup ---------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
________________________ test_noop[mail.python.org] ________________________

smtp = <smtplib.SMTP object at 0xdeadbeef>

def test_noop(smtp):
response, msg = smtp.noop()
assert response == 250
> assert 0 # for demo purposes
E assert 0

test_module.py:11: AssertionError
------------------------- Captured stdout teardown -------------------------
finalizing <smtplib.SMTP object at 0xdeadbeef>
@@ -620,7 +620,7 @@ Running the above tests results in the following test IDs being used::
<Function 'test_noop[smtp.gmail.com]'>
<Function 'test_ehlo[mail.python.org]'>
<Function 'test_noop[mail.python.org]'>

======================= no tests ran in 0.12 seconds =======================

.. _`fixture-parametrize-marks`:

@@ -650,11 +650,11 @@ Running this test will *skip* the invocation of ``data_set`` with value ``2``::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 3 items

test_fixture_marks.py::test_data[0] PASSED [ 33%]
test_fixture_marks.py::test_data[1] PASSED [ 66%]
test_fixture_marks.py::test_data[2] SKIPPED [100%]

=================== 2 passed, 1 skipped in 0.12 seconds ====================

.. _`interdependent fixtures`:
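
A parametrized fixture that would give the PASSED/PASSED/SKIPPED pattern above, sketched with ``pytest.param`` and a ``skip`` mark on the third value::

    # content of test_fixture_marks.py -- sketch
    import pytest

    @pytest.fixture(params=[0, 1, pytest.param(2, marks=pytest.mark.skip)])
    def data_set(request):
        return request.param

    def test_data(data_set):
        pass
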
@@ -693,10 +693,10 @@ Here we declare an ``app`` fixture which receives the previously defined
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items

test_appsetup.py::test_smtp_exists[smtp.gmail.com] PASSED [ 50%]
test_appsetup.py::test_smtp_exists[mail.python.org] PASSED [100%]

========================= 2 passed in 0.12 seconds =========================

Due to the parametrization of ``smtp`` the test will run twice with two
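
The ``app`` fixture the text describes can be sketched like this, reusing the parametrized ``smtp`` fixture (class and attribute names are illustrative)::

    # content of test_appsetup.py -- sketch
    import pytest

    class App(object):
        def __init__(self, smtp):
            self.smtp = smtp

    @pytest.fixture(scope="module")
    def app(smtp):
        return App(smtp)

    def test_smtp_exists(app):
        assert app.smtp
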
@@ -762,26 +762,26 @@ Let's run the tests in verbose mode and with looking at the print-output::
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items

test_module.py::test_0[1] SETUP otherarg 1
RUN test0 with otherarg 1
PASSED TEARDOWN otherarg 1

test_module.py::test_0[2] SETUP otherarg 2
RUN test0 with otherarg 2
PASSED TEARDOWN otherarg 2

test_module.py::test_1[mod1] SETUP modarg mod1
RUN test1 with modarg mod1
PASSED
test_module.py::test_2[mod1-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod1
PASSED TEARDOWN otherarg 1

test_module.py::test_2[mod1-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod1
PASSED TEARDOWN otherarg 2

test_module.py::test_1[mod2] TEARDOWN modarg mod1
SETUP modarg mod2
RUN test1 with modarg mod2

@@ -789,13 +789,13 @@ Let's run the tests in verbose mode and with looking at the print-output::
test_module.py::test_2[mod2-1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod2
PASSED TEARDOWN otherarg 1

test_module.py::test_2[mod2-2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod2
PASSED TEARDOWN otherarg 2
TEARDOWN modarg mod2


========================= 8 passed in 0.12 seconds =========================

You can see that the parametrized module-scoped ``modarg`` resource caused an
@@ -5,9 +5,9 @@
pytest-2.3: reasoning for fixture/funcarg evolution
=============================================================

**Target audience**: Reading this document requires basic knowledge of
python testing, xUnit setup methods and the (previous) basic pytest
funcarg mechanism, see http://pytest.org/2.2.4/funcargs.html
If you are new to pytest, then you can simply ignore this
section and read the other sections.

@@ -18,12 +18,12 @@ Shortcomings of the previous ``pytest_funcarg__`` mechanism

The pre pytest-2.3 funcarg mechanism calls a factory each time a
funcarg for a test function is required. If a factory wants to
re-use a resource across different scopes, it often used
the ``request.cached_setup()`` helper to manage caching of
resources. Here is a basic example how we could implement
a per-session Database object::

# content of conftest.py
class Database(object):
def __init__(self):
print ("database instance created")

@@ -31,7 +31,7 @@ a per-session Database object::
print ("database instance destroyed")

def pytest_funcarg__db(request):
return request.cached_setup(setup=DataBase,
teardown=lambda db: db.destroy,
scope="session")

@@ -40,13 +40,13 @@ There are several limitations and difficulties with this approach:
1. Scoping funcarg resource creation is not straight forward, instead one must
understand the intricate cached_setup() method mechanics.

2. parametrizing the "db" resource is not straight forward:
you need to apply a "parametrize" decorator or implement a
:py:func:`~hookspec.pytest_generate_tests` hook
calling :py:func:`~python.Metafunc.parametrize` which
performs parametrization at the places where the resource
is used. Moreover, you need to modify the factory to use an
``extrakey`` parameter containing ``request.param`` to the
:py:func:`~python.Request.cached_setup` call.

3. Multiple parametrized session-scoped resources will be active

@@ -56,7 +56,7 @@ There are several limitations and difficulties with this approach:
4. there is no way how you can make use of funcarg factories
in xUnit setup methods.

5. A non-parametrized fixture function cannot use a parametrized
funcarg resource if it isn't stated in the test function signature.

All of these limitations are addressed with pytest-2.3 and its

@@ -72,18 +72,18 @@ the scope::

@pytest.fixture(scope="session")
def db(request):
# factory will only be invoked once per session -
db = DataBase()
request.addfinalizer(db.destroy) # destroy when session is finished
return db

This factory implementation does not need to call ``cached_setup()`` anymore
because it will only be invoked once per session. Moreover, the
``request.addfinalizer()`` registers a finalizer according to the specified
resource scope on which the factory function is operating.


Direct parametrization of funcarg resource factories
----------------------------------------------------------

Previously, funcarg factories could not directly cause parametrization.

@@ -96,9 +96,9 @@ sets. pytest-2.3 introduces a decorator for use on the factory itself::
def db(request):
... # use request.param

Here the factory will be invoked twice (with the respective "mysql"
and "pg" values set as ``request.param`` attributes) and all of
the tests requiring "db" will run twice as well. The "mysql" and
"pg" values will also be used for reporting the test-invocation variants.

This new way of parametrizing funcarg factories should in many cases
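
The hunk above omits the decorator line itself; the complete documented form is along these lines (a sketch)::

    @pytest.fixture(scope="session", params=["mysql", "pg"])
    def db(request):
        ...  # use request.param
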
@@ -136,7 +136,7 @@ argument::

The name under which the funcarg resource can be requested is ``db``.

You can still use the "old" non-decorator way of specifying funcarg factories
aka::

def pytest_funcarg__db(request):

@@ -156,10 +156,10 @@ several problems:

1. in distributed testing the master process would setup test resources
that are never needed because it only co-ordinates the test run
activities of the slave processes.

2. if you only perform a collection (with "--collect-only")
resource-setup will still be executed.

3. If a pytest_sessionstart is contained in some subdirectories
conftest.py file, it will not be called. This stems from the

@@ -194,17 +194,17 @@ overview of fixture management in your project.
Conclusion and compatibility notes
---------------------------------------------------------

**funcargs** were originally introduced to pytest-2.0. In pytest-2.3
the mechanism was extended and refined and is now described as
fixtures:

* previously funcarg factories were specified with a special
``pytest_funcarg__NAME`` prefix instead of using the
``@pytest.fixture`` decorator.

* Factories received a ``request`` object which managed caching through
``request.cached_setup()`` calls and allowed using other funcargs via
``request.getfuncargvalue()`` calls. These intricate APIs made it hard
to do proper parametrization and implement resource caching. The
new :py:func:`pytest.fixture` decorator allows to declare the scope
and let pytest figure things out for you.

@@ -212,5 +212,5 @@ fixtures:
* if you used parametrization and funcarg factories which made use of
``request.cached_setup()`` it is recommended to invest a few minutes
and simplify your fixture function code to use the :ref:`@pytest.fixture`
decorator instead. This will also allow to take advantage of
the automatic per-resource grouping of tests.
@@ -1,6 +1,8 @@
+ from __future__ import print_function
import textwrap
import inspect


class Writer(object):
def __init__(self, clsname):
self.clsname = clsname

@@ -11,10 +13,10 @@ class Writer(object):

def __exit__(self, *args):
self.file.close()
- print "wrote", self.file.name
+ print("wrote", self.file.name)

def line(self, line):
- self.file.write(line+"\n")
+ self.file.write(line + "\n")

def docmethod(self, method):
doc = " ".join(method.__doc__.split())

@@ -30,6 +32,7 @@ class Writer(object):
self.line(w.fill(doc))
self.line("")


def pytest_funcarg__a(request):
with Writer("request") as writer:
writer.docmethod(request.getfixturevalue)

@@ -37,5 +40,6 @@ def pytest_funcarg__a(request):
writer.docmethod(request.addfinalizer)
writer.docmethod(request.applymarker)


def test_hello(a):
pass
@@ -50,17 +50,17 @@ That’s it. You can now execute the test function::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_sample.py F [100%]

================================= FAILURES =================================
_______________________________ test_answer ________________________________

def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)

test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
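
The module being executed above is the two-function getting-started example, reconstructed here from the traceback::

    # content of test_sample.py
    def func(x):
        return x + 1

    def test_answer():
        assert func(3) == 5
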
@@ -117,15 +117,15 @@ Once you develop multiple tests, you may want to group them into a class. pytest
.F [100%]
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass object at 0xdeadbeef>

def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E AssertionError: assert False
E + where False = hasattr('hello', 'check')

test_class.py:8: AssertionError
1 failed, 1 passed in 0.12 seconds

@@ -147,14 +147,14 @@ List the name ``tmpdir`` in the test function signature and ``pytest`` will look
F [100%]
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('PYTEST_TMPDIR/test_needsfiles0')

def test_needsfiles(tmpdir):
print (tmpdir)
> assert 0
E assert 0

test_tmpdir.py:3: AssertionError
--------------------------- Captured stdout call ---------------------------
PYTEST_TMPDIR/test_needsfiles0

@@ -28,17 +28,17 @@ To execute it::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 1 item

test_sample.py F [100%]

================================= FAILURES =================================
_______________________________ test_answer ________________________________

def test_answer():
> assert inc(3) == 5
E assert 4 == 5
E + where 4 = inc(3)

test_sample.py:5: AssertionError
========================= 1 failed in 0.12 seconds =========================
@@ -35,7 +35,7 @@ patch this function before calling into a function which uses it::
assert x == '/abc/.ssh'

Here our test function monkeypatches ``os.path.expanduser`` and
then calls into a function that calls it. After the test function
finishes the ``os.path.expanduser`` modification will be undone.

example: preventing "requests" from remote operations
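
The technique this heading introduces is an autouse fixture that strips the network entry point from ``requests``; a sketch of the documented conftest::

    # content of conftest.py -- sketch
    import pytest

    @pytest.fixture(autouse=True)
    def no_requests(monkeypatch):
        # any test that tries to issue an HTTP request will now fail
        monkeypatch.delattr("requests.sessions.Session.request")
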
@@ -51,15 +51,15 @@ requests in all your tests, you can do::
monkeypatch.delattr("requests.sessions.Session.request")

This autouse fixture will be executed for each test function and it
will delete the method ``request.session.Session.request``
so that any attempts within tests to create http requests will fail.


.. note::

Be advised that it is not recommended to patch builtin functions such as ``open``,
``compile``, etc., because it might break pytest's internals. If that's
unavoidable, passing ``--tb=native``, ``--assert=plain`` and ``--capture=no`` might
help although there's no guarantee.

.. note::

@@ -77,7 +77,7 @@ so that any attempts within tests to create http requests will fail.
assert functools.partial == 3

See issue `#3290 <https://github.com/pytest-dev/pytest/issues/3290>`_ for details.


.. currentmodule:: _pytest.monkeypatch
@@ -11,13 +11,13 @@ Parametrizing fixtures and test functions

pytest enables test parametrization at several levels:

- :py:func:`pytest.fixture` allows one to :ref:`parametrize fixture
functions <fixture-parametrize>`.

* `@pytest.mark.parametrize`_ allows one to define multiple sets of
arguments and fixtures at the test function or class.

* `pytest_generate_tests`_ allows one to define custom parametrization
schemes or extensions.

.. _parametrizemark:

@@ -57,14 +57,14 @@ them in turn::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_expectation.py ..F [100%]

================================= FAILURES =================================
____________________________ test_eval[6*9-42] _____________________________

test_input = '6*9', expected = 42

@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
("2+4", 6),

@@ -74,7 +74,7 @@ them in turn::
> assert eval(test_input) == expected
E AssertionError: assert 54 == 42
E + where 54 = eval('6*9')

test_expectation.py:8: AssertionError
==================== 1 failed, 2 passed in 0.12 seconds ====================
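
The parametrized test module producing the ``..F`` run above, reconstructed from the traceback::

    # content of test_expectation.py
    import pytest

    @pytest.mark.parametrize("test_input,expected", [
        ("3+5", 8),
        ("2+4", 6),
        ("6*9", 42),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected
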
@@ -106,9 +106,9 @@ Let's run this::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile:
collected 3 items

test_expectation.py ..x [100%]

=================== 2 passed, 1 xfailed in 0.12 seconds ====================

The one parameter set which caused a failure previously now

@@ -123,7 +123,7 @@ To get all combinations of multiple parametrized arguments you can stack
def test_foo(x, y):
pass

This will run the test with the arguments set to ``x=0/y=2``, ``x=1/y=2``,
``x=0/y=3``, and ``x=1/y=3`` exhausting parameters in the order of the decorators.

.. _`pytest_generate_tests`:
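
The stacked decorators the hunk alludes to look like this in full (sketch)::

    import pytest

    @pytest.mark.parametrize("x", [0, 1])
    @pytest.mark.parametrize("y", [2, 3])
    def test_foo(x, y):
        pass
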
@@ -174,15 +174,15 @@ Let's also run with a stringinput that will lead to a failing test::
F [100%]
================================= FAILURES =================================
___________________________ test_valid_string[!] ___________________________

stringinput = '!'

def test_valid_string(stringinput):
> assert stringinput.isalpha()
E AssertionError: assert False
E + where False = <built-in method isalpha of str object at 0xdeadbeef>()
E + where <built-in method isalpha of str object at 0xdeadbeef> = '!'.isalpha

test_strings.py:3: AssertionError
1 failed in 0.12 seconds
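
The ``stringinput`` fixture in these runs is generated from a command line option via ``pytest_generate_tests``; a sketch of the kind of conftest involved (option name follows the surrounding text)::

    # content of conftest.py -- sketch
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.getoption("stringinput"))
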
@@ -198,7 +198,7 @@ list::
SKIP [1] test_strings.py: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
1 skipped in 0.12 seconds

Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, all parameter names across
those sets cannot be duplicated, otherwise an error will be raised.

More examples

@@ -334,12 +334,12 @@ Running it with the report-on-xfail option gives this output::
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR/example, inifile:
collected 7 items

xfail_demo.py xxxxxxx [100%]
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4

@@ -349,7 +349,7 @@ Running it with the report-on-xfail option gives this output::
XFAIL xfail_demo.py::test_hello6
reason: reason
XFAIL xfail_demo.py::test_hello7

======================== 7 xfailed in 0.12 seconds =========================

.. _`skip/xfail with parametrize`:

@@ -15,4 +15,3 @@ pageTracker._trackPageview();
} catch(err) {}</script>
</body>
</html>
-

@@ -15,4 +15,3 @@ pageTracker._trackPageview();
} catch(err) {}</script>
</body>
</html>
-

@@ -15,4 +15,3 @@ pageTracker._trackPageview();
} catch(err) {}</script>
</body>
</html>
-
@ -6,22 +6,22 @@ Write and report coverage data with the 'coverage' package.

.. contents::
   :local:

Note: Original code by Ross Lawley.

Install
--------------

Use pip to (un)install::

    pip install pytest-coverage
    pip uninstall pytest-coverage

or alternatively use easy_install to install::

    easy_install pytest-coverage


Usage
-------------

To get full test coverage reports for a particular package type::

@ -12,7 +12,7 @@ Install

To install the plugin issue::

    easy_install pytest-figleaf  # or
    pip install pytest-figleaf

and if you are using pip you can also uninstall::

@ -15,4 +15,3 @@ pageTracker._trackPageview();

    } catch(err) {}</script>
    </body>
    </html>

@ -32,14 +32,14 @@ Running this would result in a passed test except for the last

    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 1 item

    test_tmpdir.py F [100%]

    ================================= FAILURES =================================
    _____________________________ test_create_file _____________________________

    tmpdir = local('PYTEST_TMPDIR/test_create_file0')

        def test_create_file(tmpdir):
            p = tmpdir.mkdir("sub").join("hello.txt")
            p.write("content")

@ -47,7 +47,7 @@ Running this would result in a passed test except for the last

            assert len(tmpdir.listdir()) == 1
    >       assert 0
    E       assert 0

    test_tmpdir.py:7: AssertionError
    ========================= 1 failed in 0.12 seconds =========================

@ -92,18 +92,18 @@ it from a unittest-style test::

    def db_class(request):
        class DummyDB(object):
            pass
        # set a class attribute on the invoking test context
        request.cls.db = DummyDB()

This defines a fixture function ``db_class`` which - if used - is
called once for each test class and which sets the class-level
``db`` attribute to a ``DummyDB`` instance. The fixture function
achieves this by receiving a special ``request`` object which gives
access to :ref:`the requesting test context <request-context>` such
as the ``cls`` attribute, denoting the class from which the fixture
is used. This architecture de-couples fixture writing from actual test
code and allows re-use of the fixture by a minimal reference, the fixture
name. So let's write an actual ``unittest.TestCase`` class using our
fixture definition::

    # content of test_unittest_db.py
@ -120,7 +120,7 @@ fixture definition::

        def test_method2(self):
            assert 0, self.db  # fail for demo purposes

The ``@pytest.mark.usefixtures("db_class")`` class-decorator makes sure that
the pytest fixture function ``db_class`` is called once per class.
Due to the deliberately failing assert statements, we can take a look at
the ``self.db`` values in the traceback::
@ -130,30 +130,30 @@ the ``self.db`` values in the traceback::

    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 2 items

    test_unittest_db.py FF [100%]

    ================================= FAILURES =================================
    ___________________________ MyTest.test_method1 ____________________________

    self = <test_unittest_db.MyTest testMethod=test_method1>

        def test_method1(self):
            assert hasattr(self, "db")
    >       assert 0, self.db  # fail for demo purposes
    E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
    E       assert 0

    test_unittest_db.py:9: AssertionError
    ___________________________ MyTest.test_method2 ____________________________

    self = <test_unittest_db.MyTest testMethod=test_method2>

        def test_method2(self):
    >       assert 0, self.db  # fail for demo purposes
    E       AssertionError: <conftest.db_class.<locals>.DummyDB object at 0xdeadbeef>
    E       assert 0

    test_unittest_db.py:12: AssertionError
    ========================= 2 failed in 0.12 seconds =========================

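For reference, the test class exercised above can be pieced together from the hunks and tracebacks in
this diff; an approximate reconstruction (the exact layout is an assumption)::

    # content of test_unittest_db.py (approximate reconstruction)
    import unittest
    import pytest

    @pytest.mark.usefixtures("db_class")
    class MyTest(unittest.TestCase):
        def test_method1(self):
            assert hasattr(self, "db")
            assert 0, self.db  # fail for demo purposes

        def test_method2(self):
            assert 0, self.db  # fail for demo purposes
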
@ -166,10 +166,10 @@ Using autouse fixtures and accessing other fixtures

---------------------------------------------------

Although it's usually better to explicitly declare use of fixtures you need
for a given test, you may sometimes want to have fixtures that are
automatically used in a given context. After all, the traditional
style of unittest-setup mandates the use of this implicit fixture writing
and chances are, you are used to it or like it.

You can flag fixture functions with ``@pytest.fixture(autouse=True)``
and define the fixture function in the context where you want it used.

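As a minimal sketch (the names ``TestArea`` and ``transact`` are hypothetical, not taken from the text
above), an autouse fixture defined inside a class runs around every test in that class without being
requested explicitly::

    import pytest

    class TestArea(object):
        @pytest.fixture(autouse=True)
        def transact(self):
            # runs automatically around every test method in this class
            print("begin")
            yield
            print("rollback")

        def test_method1(self):
            assert True
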
@ -111,9 +111,9 @@ For more information see :ref:`marks <mark>`.

::

    pytest --pyargs pkg.testing

This will import ``pkg.testing`` and use its filesystem location to find and run tests from.


Modifying Python traceback printing
----------------------------------------------

@ -195,7 +195,7 @@ in your code and pytest automatically disables its output capture for that test:

Using the builtin breakpoint function
-------------------------------------

Python 3.7 introduces a builtin ``breakpoint()`` function.
Pytest supports the use of ``breakpoint()`` with the following behaviours:

 - When ``breakpoint()`` is called and ``PYTHONBREAKPOINT`` is set to the default value, pytest will use the custom internal PDB trace UI instead of the system default ``Pdb``.

@ -496,7 +496,7 @@ hook was invoked::

    $ python myinvoke.py
    . [100%]*** test run reporting finishing

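A sketch of what ``myinvoke.py`` could look like to produce the output above, assuming the hook being
demonstrated is ``pytest_sessionfinish`` (an assumption based on the printed message)::

    # content of myinvoke.py (sketch)
    import pytest

    class MyPlugin(object):
        def pytest_sessionfinish(self):
            print("*** test run reporting finishing")

    # run pytest in-process with our ad-hoc plugin registered
    pytest.main(["-q"], plugins=[MyPlugin()])
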
.. note::

@ -25,14 +25,14 @@ Running pytest now produces this output::

    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 1 item

    test_show_warnings.py . [100%]

    ============================= warnings summary =============================
    test_show_warnings.py::test_one
      $REGENDOC_TMPDIR/test_show_warnings.py:4: UserWarning: api v1, should use functions from v2
        warnings.warn(UserWarning("api v1, should use functions from v2"))

    -- Docs: http://doc.pytest.org/en/latest/warnings.html
    =================== 1 passed, 1 warnings in 0.12 seconds ===================

@ -45,17 +45,17 @@ them into errors::

    F [100%]
    ================================= FAILURES =================================
    _________________________________ test_one _________________________________

        def test_one():
    >       assert api_v1() == 1

    test_show_warnings.py:8:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

        def api_v1():
    >       warnings.warn(UserWarning("api v1, should use functions from v2"))
    E       UserWarning: api v1, should use functions from v2

    test_show_warnings.py:4: UserWarning
    1 failed in 0.12 seconds

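Pieced together from the tracebacks above, the module being exercised is approximately the following
(line numbers and exact layout are assumptions)::

    # content of test_show_warnings.py (approximate reconstruction)
    import warnings

    def api_v1():
        # old API function that emits a warning and still returns a value
        warnings.warn(UserWarning("api v1, should use functions from v2"))
        return 1

    def test_one():
        assert api_v1() == 1
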
@ -6,7 +6,7 @@ classic xunit-style setup

========================================

This section describes a classic and popular way to implement fixtures
(setup and teardown test state) on a per-module/class/function basis.

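A brief sketch of this style, with hypothetical names not taken from the section above::

    # content of test_module.py (hypothetical example)

    def setup_module(module):
        # called once before any test in this module runs
        module.resource = "expensive resource"

    def teardown_module(module):
        # called once after the last test in this module has finished
        module.resource = None

    class TestClass(object):
        def setup_method(self, method):
            # called before every test method in this class
            self.value = 1

        def test_value(self):
            assert self.value == 1
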
.. note::

@ -6,7 +6,7 @@ pytest {version} has just been released to PyPI.

This is a bug-fix release, being a drop-in replacement. To upgrade::

    pip install --upgrade pytest

The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.

Thanks to all who contributed to this release, among them: