Compare commits
149 Commits
| SHA1 |
|---|
| a220a40350 |
| dd6c534468 |
| 4a0aea2deb |
| 54cea3d178 |
| 9628c71210 |
| a0ad9e31da |
| 22cff038f8 |
| d26c1e3ad9 |
| 259b86b6ab |
| f0f2d2b861 |
| b671c5a8bf |
| f320686fe0 |
| 742f9cb825 |
| 66fbebfc26 |
| 76f3be452a |
| b2b1eb262f |
| 9fce430c89 |
| e114feb458 |
| c09f69df2a |
| 3900879a5c |
| 7b1cc55add |
| d904981bf3 |
| f13333afce |
| fad1fbe381 |
| b11640c1eb |
| 03829fde8a |
| 2e2f72156a |
| 22e9b006da |
| 802585cb66 |
| cd747c48a4 |
| 26019b33f8 |
| d00e2da6e9 |
| 2f993af54a |
| af5e9238c8 |
| 27cea340f3 |
| c3ba9225ef |
| 111d640bdb |
| 734c435d00 |
| 27bb2eceb4 |
| 383239cafc |
| fd7bfa30d0 |
| 3427d27d5a |
| 0b540f98b1 |
| bdab29fa3d |
| 5631a86296 |
| 52aadcd7c1 |
| f5e72d2f5f |
| a5ac19cc5e |
| 14e3a5fcb9 |
| 7b608f976d |
| fe560b7192 |
| b61cbc4fba |
| a3ec3df0c8 |
| e23af009f9 |
| 531e0dcaa3 |
| dc5f33ba5c |
| 655ab0bf8b |
| a7199fa8ab |
| d714c196a5 |
| ee7e1c94d2 |
| de9d116a49 |
| f003914d4b |
| 1e6dc6f8e5 |
| c03612f729 |
| 29fa9d5bff |
| 3cdbb1854f |
| 4cb60dac3d |
| 8c7974af01 |
| 46cc9ab77c |
| 2d08005039 |
| baadd569e8 |
| 11b391ff49 |
| 3676da594c |
| a4fd5cdcb5 |
| ae4e596b31 |
| cfdebb3ba4 |
| 46e30435eb |
| e86ba41a32 |
| e89abe6a40 |
| 48b5c13f73 |
| c24ffa3b4c |
| 761d552814 |
| 4bc6ecb8a5 |
| 9ee0a1f5c3 |
| 6b91bc88de |
| 6690b8a444 |
| 7093d8f65e |
| 794d4585d3 |
| 966391c77e |
| 9c8847a0cb |
| 58aaabbb10 |
| 2802135741 |
| bf77daa2ee |
| 9933635cf7 |
| ac5c5cc1ef |
| 810320f591 |
| 25d2acbdb2 |
| 52c134aed3 |
| 14b6380e5f |
| 70cdfaf661 |
| abfd9774ef |
| e57cc55719 |
| 696c702da7 |
| bee2c864d8 |
| e27a0d69aa |
| 15222ceca2 |
| 3c1ca03b9c |
| 25ed4edbc7 |
| 1e93089165 |
| b2a8e06e4f |
| 09349c344e |
| 6cf515b164 |
| c52f87ede3 |
| 549f5c1a47 |
| 3f8ff7f090 |
| ad36407747 |
| 10d43bd3bf |
| 1fc185b640 |
| 181bd60bf9 |
| 3288c9a110 |
| 5e00549ecc |
| b770a32dc8 |
| f9157b1b6b |
| f4e811afc0 |
| 59cdef92be |
| 709b8b65a4 |
| 0824076e11 |
| 67161ee9f8 |
| 1c891d7d97 |
| 12b1bff6c5 |
| 657976e98a |
| a993add783 |
| 539523cfee |
| f18780ed8a |
| 806d47b4d4 |
| bfc9f61482 |
| 2a99d82c3b |
| 9b2753b302 |
| e9bfccdf2d |
| 7b5d26c1a8 |
| 362b1b3c4f |
| 5c0c1977e3 |
| 39331856ed |
| dc9154e8ff |
| 021fba4e84 |
| fd84c886ee |
| e6020781f6 |
| acd3c4fbc4 |
| c847b83d56 |
.github/PULL_REQUEST_TEMPLATE.md (vendored): 2 changed lines

```diff
@@ -12,4 +12,4 @@ Here's a quick checklist that should be present in PRs:

 Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please:

-- [ ] Add yourself to `AUTHORS`;
+- [ ] Add yourself to `AUTHORS`, in alphabetical order;
```
.travis.yml: 22 changed lines

```diff
@@ -1,9 +1,10 @@
 sudo: false
 language: python
 python:
-  - '3.5'
+  - '3.6'
 # command to install dependencies
-install: "pip install -U tox"
+install:
+  - pip install --upgrade --pre tox
 # # command to run tests
 env:
   matrix:
@@ -13,18 +14,17 @@ env:
     - TOXENV=linting
     - TOXENV=py27
    - TOXENV=py34
-    - TOXENV=py35
+    - TOXENV=py36
     - TOXENV=py27-pexpect
     - TOXENV=py27-xdist
     - TOXENV=py27-trial
     - TOXENV=py27-numpy
-    - TOXENV=py35-pexpect
-    - TOXENV=py35-xdist
-    - TOXENV=py35-trial
-    - TOXENV=py35-numpy
+    - TOXENV=py36-pexpect
+    - TOXENV=py36-xdist
+    - TOXENV=py36-trial
+    - TOXENV=py36-numpy
     - TOXENV=py27-nobyte
     - TOXENV=doctesting
-    - TOXENV=freeze
     - TOXENV=docs

 matrix:
@@ -35,8 +35,10 @@ matrix:
       python: '3.3'
     - env: TOXENV=pypy
       python: 'pypy-5.4'
-    - env: TOXENV=py36
-      python: '3.6'
+    - env: TOXENV=py35
+      python: '3.5'
+    - env: TOXENV=py35-freeze
+      python: '3.5'
     - env: TOXENV=py37
       python: 'nightly'
   allow_failures:
```
AUTHORS: 3 changed lines

```diff
@@ -46,6 +46,7 @@ Dave Hunt
 David Díaz-Barquero
 David Mohr
 David Vierra
+Daw-Ran Liou
 Denis Kirisov
 Diego Russo
 Dmitry Dygalo
@@ -164,6 +165,7 @@ Stephan Obermann
 Tareq Alayan
 Ted Xiao
 Thomas Grainger
+Tom Dalton
 Tom Viner
 Trevor Bekolay
 Tyler Goodlet
@@ -173,5 +175,6 @@ Vidar T. Fauske
 Vitaly Lashmanov
 Vlad Dragos
 Wouter van Ackooy
+Xuan Luong
 Xuecong Liao
 Zoltán Máté
```
CHANGELOG.rst: 147 changed lines

```diff
@@ -8,6 +8,153 @@

 .. towncrier release notes start

+Pytest 3.2.5 (2017-11-15)
+=========================
+
+Bug Fixes
+---------
+
+- Remove ``py<1.5`` restriction from ``pytest`` as this can cause version
+  conflicts in some installations. (`#2926
+  <https://github.com/pytest-dev/pytest/issues/2926>`_)
+
+
+Pytest 3.2.4 (2017-11-13)
+=========================
+
+Bug Fixes
+---------
+
+- Fix the bug where running with ``--pyargs`` will result in items with
+  empty ``parent.nodeid`` if run from a different root directory. (`#2775
+  <https://github.com/pytest-dev/pytest/issues/2775>`_)
+
+- Fix issue with ``@pytest.parametrize`` if argnames was specified as keyword arguments.
+  (`#2819 <https://github.com/pytest-dev/pytest/issues/2819>`_)
+
+- Strip whitespace from marker names when reading them from INI config. (`#2856
+  <https://github.com/pytest-dev/pytest/issues/2856>`_)
+
+- Show full context of doctest source in the pytest output, if the line number of
+  failed example in the docstring is < 9. (`#2882
+  <https://github.com/pytest-dev/pytest/issues/2882>`_)
+
+- Match fixture paths against actual path segments in order to avoid matching folders which share a prefix.
+  (`#2836 <https://github.com/pytest-dev/pytest/issues/2836>`_)
+
+Improved Documentation
+----------------------
+
+- Introduce a dedicated section about conftest.py. (`#1505
+  <https://github.com/pytest-dev/pytest/issues/1505>`_)
+
+- Explicitly mention ``xpass`` in the documentation of ``xfail``. (`#1997
+  <https://github.com/pytest-dev/pytest/issues/1997>`_)
+
+- Append example for pytest.param in the example/parametrize document. (`#2658
+  <https://github.com/pytest-dev/pytest/issues/2658>`_)
+
+- Clarify language of proposal for fixtures parameters (`#2893
+  <https://github.com/pytest-dev/pytest/issues/2893>`_)
+
+- List python 3.6 in the documented supported versions in the getting started
+  document. (`#2903 <https://github.com/pytest-dev/pytest/issues/2903>`_)
+
+- Clarify the documentation of available fixture scopes. (`#538
+  <https://github.com/pytest-dev/pytest/issues/538>`_)
+
+- Add documentation about the ``python -m pytest`` invocation adding the
+  current directory to sys.path. (`#911
+  <https://github.com/pytest-dev/pytest/issues/911>`_)
+
+
+Pytest 3.2.3 (2017-10-03)
+=========================
+
+Bug Fixes
+---------
+
+- Fix crash in tab completion when no prefix is given. (`#2748
+  <https://github.com/pytest-dev/pytest/issues/2748>`_)
+
+- The equality checking function (``__eq__``) of ``MarkDecorator`` returns
+  ``False`` if one object is not an instance of ``MarkDecorator``. (`#2758
+  <https://github.com/pytest-dev/pytest/issues/2758>`_)
+
+- When running ``pytest --fixtures-per-test``: don't crash if an item has no
+  _fixtureinfo attribute (e.g. doctests) (`#2788
+  <https://github.com/pytest-dev/pytest/issues/2788>`_)
+
+
+Improved Documentation
+----------------------
+
+- In help text of ``-k`` option, add example of using ``not`` to not select
+  certain tests whose names match the provided expression. (`#1442
+  <https://github.com/pytest-dev/pytest/issues/1442>`_)
+
+- Add note in ``parametrize.rst`` about calling ``metafunc.parametrize``
+  multiple times. (`#1548 <https://github.com/pytest-dev/pytest/issues/1548>`_)
+
+
+Trivial/Internal Changes
+------------------------
+
+- Set ``xfail_strict=True`` in pytest's own test suite to catch expected
+  failures as soon as they start to pass. (`#2722
+  <https://github.com/pytest-dev/pytest/issues/2722>`_)
+
+- Fix typo in example of passing a callable to markers (in example/markers.rst)
+  (`#2765 <https://github.com/pytest-dev/pytest/issues/2765>`_)
+
+
+Pytest 3.2.2 (2017-09-06)
+=========================
+
+Bug Fixes
+---------
+
+- Calling the deprecated `request.getfuncargvalue()` now shows the source of
+  the call. (`#2681 <https://github.com/pytest-dev/pytest/issues/2681>`_)
+
+- Allow tests declared as ``@staticmethod`` to use fixtures. (`#2699
+  <https://github.com/pytest-dev/pytest/issues/2699>`_)
+
+- Fixed edge-case during collection: attributes which raised ``pytest.fail``
+  when accessed would abort the entire collection. (`#2707
+  <https://github.com/pytest-dev/pytest/issues/2707>`_)
+
+- Fix ``ReprFuncArgs`` with mixed unicode and UTF-8 args. (`#2731
+  <https://github.com/pytest-dev/pytest/issues/2731>`_)
+
+
+Improved Documentation
+----------------------
+
+- In examples on working with custom markers, add examples demonstrating the
+  usage of ``pytest.mark.MARKER_NAME.with_args`` in comparison with
+  ``pytest.mark.MARKER_NAME.__call__`` (`#2604
+  <https://github.com/pytest-dev/pytest/issues/2604>`_)
+
+- In one of the simple examples, use `pytest_collection_modifyitems()` to skip
+  tests based on a command-line option, allowing its sharing while preventing a
+  user error when acessing `pytest.config` before the argument parsing. (`#2653
+  <https://github.com/pytest-dev/pytest/issues/2653>`_)
+
+
+Trivial/Internal Changes
+------------------------
+
+- Fixed minor error in 'Good Practices/Manual Integration' code snippet.
+  (`#2691 <https://github.com/pytest-dev/pytest/issues/2691>`_)
+
+- Fixed typo in goodpractices.rst. (`#2721
+  <https://github.com/pytest-dev/pytest/issues/2721>`_)
+
+- Improve user guidance regarding ``--resultlog`` deprecation. (`#2739
+  <https://github.com/pytest-dev/pytest/issues/2739>`_)
+
+
 Pytest 3.2.1 (2017-08-08)
 =========================
```
```diff
@@ -120,7 +120,7 @@ the following:
 - PyPI presence with a ``setup.py`` that contains a license, ``pytest-``
   prefixed name, version number, authors, short and long description.

-- a ``tox.ini`` for running tests using `tox <http://tox.testrun.org>`_.
+- a ``tox.ini`` for running tests using `tox <https://tox.readthedocs.io>`_.

 - a ``README.txt`` describing how to use the plugin and on which
   platforms it runs.
```
```diff
@@ -177,7 +177,8 @@ Short version
 #. Write a ``changelog`` entry: ``changelog/2574.bugfix``, use issue id number
    and one of ``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or
    ``trivial`` for the issue type.
-#. Add yourself to ``AUTHORS`` file if not there yet, in alphabetical order.
+#. Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please
+   add yourself to the ``AUTHORS`` file, in alphabetical order;


 Long version
```
```diff
@@ -1,5 +1,9 @@
-How to release pytest
---------------------------------------------
+Release Procedure
+-----------------

+Our current policy for releasing is to aim for a bugfix every few weeks and a minor release every 2-3 months. The idea
+is to get fixes and new features out instead of trying to cram a ton of features into a release and by consequence
+taking a lot of time to make a new one.
+
 .. important::

@@ -21,7 +25,7 @@ How to release pytest
 #. Generate docs, changelog, announcements and upload a package to
    your ``devpi`` staging server::

-     invoke generate.pre_release <VERSION> <DEVPI USER> --password <DEVPI PASSWORD>
+     invoke generate.pre-release <VERSION> <DEVPI USER> --password <DEVPI PASSWORD>

    If ``--password`` is not given, it is assumed the user is already logged in ``devpi``.
    If you don't have an account, please ask for one.
@@ -49,7 +53,7 @@ How to release pytest

 #. Publish to PyPI::

-     invoke generate.publish_release <VERSION> <DEVPI USER> <PYPI_NAME>
+     invoke generate.publish-release <VERSION> <DEVPI USER> <PYPI_NAME>

    where PYPI_NAME is the name of pypi.python.org as configured in your ``~/.pypirc``
    file `for devpi <http://doc.devpi.net/latest/quickstart-releaseprocess.html?highlight=pypirc#devpi-push-releasing-to-an-external-index>`_.
```
```diff
@@ -78,7 +78,7 @@ Features

 - Python2.6+, Python3.3+, PyPy-2.3, Jython-2.5 (untested);

-- Rich plugin architecture, with over 150+ `external plugins <http://docs.pytest.org/en/latest/plugins.html#installing-external-plugins-searching>`_ and thriving community;
+- Rich plugin architecture, with over 315+ `external plugins <http://plugincompat.herokuapp.com>`_ and thriving community;


 Documentation
```
```diff
@@ -78,7 +78,8 @@ class FastFilesCompleter:
         completion = []
         globbed = []
         if '*' not in prefix and '?' not in prefix:
-            if prefix[-1] == os.path.sep:  # we are on unix, otherwise no bash
+            # we are on unix, otherwise no bash
+            if not prefix or prefix[-1] == os.path.sep:
                 globbed.extend(glob(prefix + '.*'))
             prefix += '*'
         globbed.extend(glob(prefix))
@@ -98,7 +99,7 @@ if os.environ.get('_ARGCOMPLETE'):
     filescompleter = FastFilesCompleter()

     def try_argcomplete(parser):
-        argcomplete.autocomplete(parser)
+        argcomplete.autocomplete(parser, always_complete_options=False)
 else:
     def try_argcomplete(parser):
         pass
```
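The completer change above guards against an empty prefix before indexing `prefix[-1]`. A standalone sketch of the globbing logic (a simplified function, not pytest's actual `FastFilesCompleter` class) shows why the order of the checks matters:

```python
import os
from glob import glob

def complete_paths(prefix):
    """Expand a partial path the way a shell completer would.

    Checking `not prefix` before `prefix[-1]` avoids the IndexError
    that indexing an empty string raises (the tab-completion crash
    fixed in #2748).
    """
    globbed = []
    if '*' not in prefix and '?' not in prefix:
        if not prefix or prefix[-1] == os.path.sep:
            # directory (or empty) prefix: also offer hidden entries
            globbed.extend(glob(prefix + '.*'))
        prefix += '*'
    globbed.extend(glob(prefix))
    return globbed

# An empty prefix no longer crashes; it simply lists the cwd:
complete_paths('')
```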
```diff
@@ -250,7 +250,7 @@ class TracebackEntry(object):
             line = str(self.statement).lstrip()
         except KeyboardInterrupt:
             raise
-        except:
+        except:  # noqa
             line = "???"
         return "  File %r:%d in %s\n  %s\n" % (fn, self.lineno + 1, name, line)
@@ -338,16 +338,16 @@ class Traceback(list):
             # XXX needs a test
             key = entry.frame.code.path, id(entry.frame.code.raw), entry.lineno
             # print "checking for recursion at", key
-            l = cache.setdefault(key, [])
-            if l:
+            values = cache.setdefault(key, [])
+            if values:
                 f = entry.frame
                 loc = f.f_locals
-                for otherloc in l:
+                for otherloc in values:
                     if f.is_true(f.eval(co_equal,
                                         __recursioncache_locals_1=loc,
                                         __recursioncache_locals_2=otherloc)):
                         return i
-            l.append(entry.frame.f_locals)
+            values.append(entry.frame.f_locals)
         return None
@@ -478,12 +478,12 @@ class FormattedExcinfo(object):
             s = str(source.getstatement(len(source) - 1))
         except KeyboardInterrupt:
             raise
-        except:
+        except:  # noqa
             try:
                 s = str(source[-1])
             except KeyboardInterrupt:
                 raise
-            except:
+            except:  # noqa
                 return 0
         return 4 + (len(s) - len(s.lstrip()))
@@ -863,7 +863,7 @@ class ReprFuncArgs(TerminalRepr):
         if self.args:
             linesofar = ""
             for name, value in self.args:
-                ns = "%s = %s" % (name, value)
+                ns = "%s = %s" % (safe_str(name), safe_str(value))
                 if len(ns) + len(linesofar) + 2 > tw.fullwidth:
                     if linesofar:
                         tw.line(linesofar)
```
```diff
@@ -254,7 +254,7 @@ def findsource(obj):
         sourcelines, lineno = py.std.inspect.findsource(obj)
     except py.builtin._sysex:
         raise
-    except:
+    except:  # noqa
         return None, -1
     source = Source()
     source.lines = [line.rstrip() for line in sourcelines]
@@ -319,22 +319,22 @@ def get_statement_startend2(lineno, node):
     import ast
     # flatten all statements and except handlers into one lineno-list
    # AST's line numbers start indexing at 1
-    l = []
+    values = []
     for x in ast.walk(node):
         if isinstance(x, _ast.stmt) or isinstance(x, _ast.ExceptHandler):
-            l.append(x.lineno - 1)
+            values.append(x.lineno - 1)
             for name in "finalbody", "orelse":
                 val = getattr(x, name, None)
                 if val:
                     # treat the finally/orelse part as its own statement
-                    l.append(val[0].lineno - 1 - 1)
-    l.sort()
-    insert_index = bisect_right(l, lineno)
-    start = l[insert_index - 1]
-    if insert_index >= len(l):
+                    values.append(val[0].lineno - 1 - 1)
+    values.sort()
+    insert_index = bisect_right(values, lineno)
+    start = values[insert_index - 1]
+    if insert_index >= len(values):
         end = None
     else:
-        end = l[insert_index]
+        end = values[insert_index]
     return start, end
```
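The renamed list feeds `bisect_right` to bracket a line number between statement starts. A minimal illustration of that bracketing step (a hypothetical helper with made-up line numbers, mirroring the logic of `get_statement_startend2`):

```python
from bisect import bisect_right

def statement_span(lineno, starts):
    """Given sorted 0-based statement start lines, return the
    (start, end) pair bracketing `lineno`; end is None when the
    line falls in the last statement."""
    values = sorted(starts)
    insert_index = bisect_right(values, lineno)
    start = values[insert_index - 1]
    if insert_index >= len(values):
        end = None
    else:
        end = values[insert_index]
    return start, end

# A line inside the second of three statements that start
# at lines 0, 3 and 7:
print(statement_span(4, [0, 3, 7]))  # (3, 7)
print(statement_span(8, [0, 3, 7]))  # (7, None)
```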
```diff
@@ -210,7 +210,7 @@ class AssertionRewritingHook(object):
             mod.__cached__ = pyc
             mod.__loader__ = self
             py.builtin.exec_(co, mod.__dict__)
-        except:
+        except:  # noqa
             if name in sys.modules:
                 del sys.modules[name]
             raise
@@ -595,23 +595,26 @@ class AssertionRewriter(ast.NodeVisitor):
         # docstrings and __future__ imports.
         aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"),
                    ast.alias("_pytest.assertion.rewrite", "@pytest_ar")]
-        expect_docstring = True
+        doc = getattr(mod, "docstring", None)
+        expect_docstring = doc is None
+        if doc is not None and self.is_rewrite_disabled(doc):
+            return
         pos = 0
-        lineno = 0
+        lineno = 1
         for item in mod.body:
             if (expect_docstring and isinstance(item, ast.Expr) and
                     isinstance(item.value, ast.Str)):
                 doc = item.value.s
-                if "PYTEST_DONT_REWRITE" in doc:
-                    # The module has disabled assertion rewriting.
+                if self.is_rewrite_disabled(doc):
                     return
+                lineno += len(doc) - 1
                 expect_docstring = False
             elif (not isinstance(item, ast.ImportFrom) or item.level > 0 or
                   item.module != "__future__"):
                 lineno = item.lineno
                 break
             pos += 1
+        else:
+            lineno = item.lineno
         imports = [ast.Import([alias], lineno=lineno, col_offset=0)
                    for alias in aliases]
         mod.body[pos:pos] = imports
@@ -637,6 +640,9 @@ class AssertionRewriter(ast.NodeVisitor):
                     not isinstance(field, ast.expr)):
                 nodes.append(field)

+    def is_rewrite_disabled(self, docstring):
+        return "PYTEST_DONT_REWRITE" in docstring
+
     def variable(self):
         """Get a new variable."""
         # Use a character invalid in python identifiers to avoid clashing.
```
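The refactor above centralizes the opt-out test in `is_rewrite_disabled` instead of repeating the substring check inline. The check itself is a one-liner, shown here as a plain function rather than the method:

```python
def is_rewrite_disabled(docstring):
    """True when a module docstring carries the marker that tells
    pytest to skip assertion rewriting for that module."""
    return "PYTEST_DONT_REWRITE" in docstring

print(is_rewrite_disabled("helpers module, PYTEST_DONT_REWRITE"))  # True
print(is_rewrite_disabled("an ordinary module docstring"))         # False
```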
```diff
@@ -53,11 +53,11 @@ def _split_explanation(explanation):
     """
     raw_lines = (explanation or u('')).split('\n')
     lines = [raw_lines[0]]
-    for l in raw_lines[1:]:
-        if l and l[0] in ['{', '}', '~', '>']:
-            lines.append(l)
+    for values in raw_lines[1:]:
+        if values and values[0] in ['{', '}', '~', '>']:
+            lines.append(values)
         else:
-            lines[-1] += '\\n' + l
+            lines[-1] += '\\n' + values
     return lines
```
```diff
@@ -11,6 +11,7 @@ import functools
 import py

 import _pytest
+from _pytest.outcomes import TEST_OUTCOME


 try:
@@ -82,7 +83,15 @@ def num_mock_patch_args(function):
     return len(patchings)


-def getfuncargnames(function, startindex=None):
+def getfuncargnames(function, startindex=None, cls=None):
+    """
+    @RonnyPfannschmidt: This function should be refactored when we revisit fixtures. The
+    fixture mechanism should ask the node for the fixture names, and not try to obtain
+    directly from the function object well after collection has occurred.
+    """
+    if startindex is None and cls is not None:
+        is_staticmethod = isinstance(cls.__dict__.get(function.__name__, None), staticmethod)
+        startindex = 0 if is_staticmethod else 1
     # XXX merge with main.py's varnames
     # assert not isclass(function)
     realfunction = function
@@ -221,14 +230,16 @@ def getimfunc(func):


 def safe_getattr(object, name, default):
-    """ Like getattr but return default upon any Exception.
+    """ Like getattr but return default upon any Exception or any OutcomeException.

     Attribute access can potentially fail for 'evil' Python objects.
     See issue #214.
+    It catches OutcomeException because of #2490 (issue #580), new outcomes are derived from BaseException
+    instead of Exception (for more details check #2707)
     """
     try:
         return getattr(object, name, default)
-    except Exception:
+    except TEST_OUTCOME:
         return default
```
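The switch from `except Exception` to `except TEST_OUTCOME` matters because outcome exceptions now derive from `BaseException`, which `except Exception` never catches. A self-contained sketch with stand-in classes (not pytest's real `_pytest.outcomes` types) demonstrates the failure mode the change addresses:

```python
class OutcomeException(BaseException):
    """Stand-in for pytest's outcome base class: deriving from
    BaseException means it slips past `except Exception`."""

# Mirrors the shape of _pytest.outcomes.TEST_OUTCOME: outcome
# exceptions plus ordinary exceptions.
TEST_OUTCOME = (OutcomeException, Exception)

def safe_getattr(obj, name, default):
    """Like getattr, but swallow both Exceptions and outcome
    exceptions raised by 'evil' properties during collection."""
    try:
        return getattr(obj, name, default)
    except TEST_OUTCOME:
        return default

class Evil:
    @property
    def attr(self):
        # e.g. an attribute that calls pytest.fail() when touched
        raise OutcomeException("fail raised during collection")

print(safe_getattr(Evil(), "attr", "fallback"))  # fallback
```

With a plain `except Exception` the `OutcomeException` would propagate and abort collection, which is exactly the edge case #2707 describes.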
```diff
@@ -1170,10 +1170,10 @@ class Config(object):
             return []
         if type == "pathlist":
             dp = py.path.local(self.inicfg.config.path).dirpath()
-            l = []
+            values = []
             for relpath in shlex.split(value):
-                l.append(dp.join(relpath, abs=True))
-            return l
+                values.append(dp.join(relpath, abs=True))
+            return values
         elif type == "args":
             return shlex.split(value)
         elif type == "linelist":
@@ -1190,13 +1190,13 @@ class Config(object):
         except KeyError:
             return None
         modpath = py.path.local(mod.__file__).dirpath()
-        l = []
+        values = []
         for relroot in relroots:
             if not isinstance(relroot, py.path.local):
                 relroot = relroot.replace("/", py.path.local.sep)
                 relroot = modpath.join(relroot, abs=True)
-            l.append(relroot)
-        return l
+            values.append(relroot)
+        return values

     def _get_override_ini_value(self, name):
         value = None
```
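The `pathlist` branch splits the ini value with `shlex.split`, so quoted entries may contain spaces, and anchors each relative path at the ini file's directory. A plain `os.path` sketch of that transformation (a hypothetical helper; pytest itself uses `py.path.local`):

```python
import os
import shlex

def parse_pathlist(value, ini_dir):
    """Split an ini 'pathlist' value shell-style and anchor each
    relative entry at the directory containing the ini file."""
    values = []
    for relpath in shlex.split(value):
        values.append(os.path.normpath(os.path.join(ini_dir, relpath)))
    return values

# Quoting keeps 'doc tests' as a single path entry.
print(parse_pathlist("tests 'doc tests'", "project"))
# on POSIX: ['project/tests', 'project/doc tests']
```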
```diff
@@ -26,7 +26,10 @@ SETUP_CFG_PYTEST = '[pytest] section in setup.cfg files is deprecated, use [tool
 GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue"

-RESULT_LOG = '--result-log is deprecated and scheduled for removal in pytest 4.0'
+RESULT_LOG = (
+    '--result-log is deprecated and scheduled for removal in pytest 4.0.\n'
+    'See https://docs.pytest.org/en/latest/usage.html#creating-resultlog-format-files for more information.'
+)

 MARK_INFO_ATTRIBUTE = RemovedInPytest4Warning(
     "MarkInfo objects are deprecated as they contain the merged marks"
```
```diff
@@ -120,7 +120,7 @@ class DoctestItem(pytest.Item):
             lines = ["%03d %s" % (i + test.lineno + 1, x)
                      for (i, x) in enumerate(lines)]
             # trim docstring error lines to 10
-            lines = lines[example.lineno - 9:example.lineno + 1]
+            lines = lines[max(example.lineno - 9, 0):example.lineno + 1]
         else:
             lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
             indent = '>>>'
```
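The `max(..., 0)` guard matters because a negative lower bound makes Python slice from the end of the list: for a failure in the first nine lines of a long docstring, `example.lineno - 9` went negative, the start landed past the stop, and the context came out empty (#2882). The effect in isolation, with made-up source lines:

```python
# a docstring rendered as 20 numbered source lines
lines = ['%03d source line' % i for i in range(1, 21)]

example_lineno = 2  # a failing example near the top

# Old slice: 2 - 9 == -7 counts from the *end* of the 20-line
# list, so the start (index 13) lies past the stop (index 3)
# and the slice is empty.
buggy = lines[example_lineno - 9:example_lineno + 1]

# Fixed slice: clamp the lower bound at 0 to keep the context.
fixed = lines[max(example_lineno - 9, 0):example_lineno + 1]

print(buggy)       # []
print(len(fixed))  # 3
```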
```diff
@@ -1,13 +1,14 @@
 from __future__ import absolute_import, division, print_function
-import sys
-
-from py._code.code import FormattedExcinfo
-
-import py
-import warnings

 import inspect
+import sys
+import warnings
+
+import py
+from py._code.code import FormattedExcinfo

 import _pytest
+from _pytest import nodes
 from _pytest._code.code import TerminalRepr
 from _pytest.compat import (
     NOTSET, exc_clear, _format_args,
@@ -15,9 +16,10 @@ from _pytest.compat import (
     is_generator, isclass, getimfunc,
     getlocation, getfuncargnames,
     safe_getattr,
+    FuncargnamesCompatAttr,
 )
-from _pytest.compat import FuncargnamesCompatAttr
+from _pytest.outcomes import fail, TEST_OUTCOME


 if sys.version_info[:2] == (2, 6):
     from ordereddict import OrderedDict
@@ -432,7 +434,8 @@ class FixtureRequest(FuncargnamesCompatAttr):
         from _pytest import deprecated
         warnings.warn(
             deprecated.GETFUNCARGVALUE,
-            DeprecationWarning)
+            DeprecationWarning,
+            stacklevel=2)
         return self.getfixturevalue(argname)

     def _get_active_fixturedef(self, argname):
@@ -457,13 +460,13 @@ class FixtureRequest(FuncargnamesCompatAttr):

     def _get_fixturestack(self):
         current = self
-        l = []
+        values = []
         while 1:
             fixturedef = getattr(current, "_fixturedef", None)
             if fixturedef is None:
-                l.reverse()
-                return l
-            l.append(fixturedef)
+                values.reverse()
+                return values
+            values.append(fixturedef)
             current = current._parent_request

     def _getfixturevalue(self, fixturedef):
@@ -572,7 +575,6 @@ class SubRequest(FixtureRequest):
         self.param_index = param_index
         self.scope = scope
         self._fixturedef = fixturedef
-        self.addfinalizer = fixturedef.addfinalizer
         self._pyfuncitem = request._pyfuncitem
         self._fixture_values = request._fixture_values
         self._fixture_defs = request._fixture_defs
@@ -583,6 +585,9 @@ class SubRequest(FixtureRequest):
     def __repr__(self):
         return "<SubRequest %r for %r>" % (self.fixturename, self._pyfuncitem)

+    def addfinalizer(self, finalizer):
+        self._fixturedef.addfinalizer(finalizer)
+

 class ScopeMismatchError(Exception):
     """ A fixture function tries to use a different fixture function which
@@ -746,7 +751,7 @@ class FixtureDef:
             try:
                 func = self._finalizer.pop()
                 func()
-            except:
+            except:  # noqa
                 exceptions.append(sys.exc_info())
         if exceptions:
             e = exceptions[0]
@@ -956,11 +961,7 @@ class FixtureManager:

     def getfixtureinfo(self, node, func, cls, funcargs=True):
         if funcargs and not hasattr(node, "nofuncargs"):
-            if cls is not None:
-                startindex = 1
-            else:
-                startindex = None
-            argnames = getfuncargnames(func, startindex)
+            argnames = getfuncargnames(func, cls=cls)
         else:
             argnames = ()
         usefixtures = getattr(func, "usefixtures", None)
@@ -984,8 +985,8 @@ class FixtureManager:
             # by their test id)
             if p.basename.startswith("conftest.py"):
                 nodeid = p.dirpath().relto(self.config.rootdir)
-                if p.sep != "/":
-                    nodeid = nodeid.replace(p.sep, "/")
+                if p.sep != nodes.SEP:
+                    nodeid = nodeid.replace(p.sep, nodes.SEP)
             self.parsefactories(plugin, nodeid)

     def _getautousenames(self, nodeid):
@@ -1040,9 +1041,14 @@ class FixtureManager:
         if faclist:
             fixturedef = faclist[-1]
             if fixturedef.params is not None:
-                func_params = getattr(getattr(metafunc.function, 'parametrize', None), 'args', [[None]])
+                parametrize_func = getattr(metafunc.function, 'parametrize', None)
+                func_params = getattr(parametrize_func, 'args', [[None]])
+                func_kwargs = getattr(parametrize_func, 'kwargs', {})
                 # skip directly parametrized arguments
-                argnames = func_params[0]
+                if "argnames" in func_kwargs:
+                    argnames = parametrize_func.kwargs["argnames"]
+                else:
+                    argnames = func_params[0]
                 if not isinstance(argnames, (tuple, list)):
                     argnames = [x.strip() for x in argnames.split(",") if x.strip()]
                 if argname not in func_params and argname not in argnames:
@@ -1130,5 +1136,5 @@ class FixtureManager:

     def _matchfactories(self, fixturedefs, nodeid):
         for fixturedef in fixturedefs:
-            if nodeid.startswith(fixturedef.baseid):
+            if nodes.ischildnode(fixturedef.baseid, nodeid):
                 yield fixturedef
```
```diff
@@ -17,6 +17,7 @@ import re
 import sys
 import time
 import pytest
+from _pytest import nodes
 from _pytest.config import filename_arg

 # Python 2.X and 3.X compatibility
@@ -252,7 +253,7 @@ def mangle_test_address(address):
     except ValueError:
         pass
     # convert file path to dotted path
-    names[0] = names[0].replace("/", '.')
+    names[0] = names[0].replace(nodes.SEP, '.')
     names[0] = _py_ext_re.sub("", names[0])
     # put any params back
     names[-1] += possible_open_bracket + params
```
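`mangle_test_address` turns a pytest nodeid into the dotted "classname" used in the JUnit XML report. A simplified standalone version of the transformation (it omits the parametrized `[...]` handling the real plugin performs, and hard-codes `nodes.SEP`, which is `"/"`):

```python
import re

_py_ext_re = re.compile(r"\.py$")

def mangle_test_address(address):
    """Split a nodeid on '::' and convert the file part into a
    dotted module path, as the junitxml plugin does (simplified:
    no handling of parametrized ids)."""
    names = address.split("::")
    # convert file path to dotted path; nodeids always use "/"
    names[0] = names[0].replace("/", ".")
    names[0] = _py_ext_re.sub("", names[0])
    return names

print(mangle_test_address("tests/sub/test_mod.py::TestClass::test_it"))
# ['tests.sub.test_mod', 'TestClass', 'test_it']
```

Replacing `nodes.SEP` rather than `os.sep` in the real code keeps the mangling stable across platforms, since nodeids always use forward slashes.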
```diff
@@ -6,6 +6,7 @@ import os
 import sys

 import _pytest
+from _pytest import nodes
 import _pytest._code
 import py
 try:
@@ -14,8 +15,8 @@ except ImportError:
     from UserDict import DictMixin as MappingMixin

 from _pytest.config import directory_arg, UsageError, hookimpl
-from _pytest.runner import collect_one_node
 from _pytest.outcomes import exit
+from _pytest.runner import collect_one_node

 tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
@@ -117,7 +118,7 @@ def wrap_session(config, doit):
             excinfo.typename, excinfo.value.msg))
         config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
         session.exitstatus = EXIT_INTERRUPTED
-    except:
+    except:  # noqa
         excinfo = _pytest._code.ExceptionInfo()
         config.notify_exception(excinfo, config.option)
         session.exitstatus = EXIT_INTERNALERROR
@@ -374,7 +375,7 @@ class Node(object):
             res = function()
         except py.builtin._sysex:
             raise
-        except:
+        except:  # noqa
             failure = sys.exc_info()
             setattr(self, exattrname, failure)
             raise
@@ -516,14 +517,22 @@ class FSCollector(Collector):
         rel = fspath.relto(parent.fspath)
         if rel:
             name = rel
-        name = name.replace(os.sep, "/")
+        name = name.replace(os.sep, nodes.SEP)
         super(FSCollector, self).__init__(name, parent, config, session)
         self.fspath = fspath

+    def _check_initialpaths_for_relpath(self):
+        for initialpath in self.session._initialpaths:
+            if self.fspath.common(initialpath) == initialpath:
+                return self.fspath.relto(initialpath.dirname)
+
     def _makeid(self):
         relpath = self.fspath.relto(self.config.rootdir)
-        if os.sep != "/":
-            relpath = relpath.replace(os.sep, "/")
+
+        if not relpath:
+            relpath = self._check_initialpaths_for_relpath()
+        if os.sep != nodes.SEP:
+            relpath = relpath.replace(os.sep, nodes.SEP)
         return relpath
```
@@ -91,7 +91,8 @@ def pytest_addoption(parser):
         "where all names are substring-matched against test names "
         "and their parent classes. Example: -k 'test_method or test_"
         "other' matches all test functions and classes whose name "
-        "contains 'test_method' or 'test_other'. "
+        "contains 'test_method' or 'test_other', while -k 'not test_method' "
+        "matches those that don't contain 'test_method' in their names. "
         "Additionally keywords are matched to classes and functions "
         "containing extra names in their 'extra_keyword_matches' set, "
         "as well as functions which have names assigned directly to them."

@@ -269,11 +270,12 @@ class MarkGenerator:
                 return
             except AttributeError:
                 pass
-        self._markers = l = set()
+        self._markers = values = set()
         for line in self._config.getini("markers"):
-            beginning = line.split(":", 1)
-            x = beginning[0].split("(", 1)[0]
-            l.add(x)
+            marker, _ = line.split(":", 1)
+            marker = marker.rstrip()
+            x = marker.split("(", 1)[0]
+            values.add(x)
         if name not in self._markers:
             raise AttributeError("%r not a registered marker" % (name,))

@@ -330,7 +332,7 @@ class MarkDecorator:
         return self.name  # for backward-compat (2.4.1 had this attr)

     def __eq__(self, other):
-        return self.mark == other.mark
+        return self.mark == other.mark if isinstance(other, MarkDecorator) else False

     def __repr__(self):
         return "<MarkDecorator %r>" % (self.mark,)

@@ -382,7 +384,7 @@ def store_mark(obj, mark):
     """
     assert isinstance(mark, Mark), mark
     # always reassign name to avoid updating pytestmark
-    # in a referene that was only borrowed
+    # in a reference that was only borrowed
     obj.pytestmark = get_unpacked_marks(obj) + [mark]
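The guarded ``__eq__`` above avoids an ``AttributeError`` when a ``MarkDecorator`` is compared against an arbitrary object. A minimal standalone sketch of the pattern, using stand-in classes rather than the real pytest ones:

```python
class Mark:
    def __init__(self, name):
        self.name = name

    def __eq__(self, other):
        return isinstance(other, Mark) and self.name == other.name


class MarkDecorator:
    def __init__(self, mark):
        self.mark = mark

    def __eq__(self, other):
        # compare only against other MarkDecorators; anything else is simply
        # unequal, instead of blowing up on a missing `other.mark` attribute
        return self.mark == other.mark if isinstance(other, MarkDecorator) else False


a = MarkDecorator(Mark("slow"))
b = MarkDecorator(Mark("slow"))
print(a == b)        # → True
print(a == "slow")   # → False (the unguarded version raised AttributeError)
```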
_pytest/nodes.py (new file, 37 lines)
@@ -0,0 +1,37 @@
+SEP = "/"
+
+
+def _splitnode(nodeid):
+    """Split a nodeid into constituent 'parts'.
+
+    Node IDs are strings, and can be things like:
+        ''
+        'testing/code'
+        'testing/code/test_excinfo.py'
+        'testing/code/test_excinfo.py::TestFormattedExcinfo::()'
+
+    Return values are lists e.g.
+        []
+        ['testing', 'code']
+        ['testing', 'code', 'test_excinfo.py']
+        ['testing', 'code', 'test_excinfo.py', 'TestFormattedExcinfo', '()']
+    """
+    if nodeid == '':
+        # If there is no root node at all, return an empty list so the
+        # caller's logic can remain sane
+        return []
+    parts = nodeid.split(SEP)
+    # Replace single last element 'test_foo.py::Bar::()' with multiple
+    # elements 'test_foo.py', 'Bar', '()'
+    parts[-1:] = parts[-1].split("::")
+    return parts
+
+
+def ischildnode(baseid, nodeid):
+    """Return True if the nodeid is a child node of the baseid.
+
+    E.g. 'foo/bar::Baz::()' is a child of 'foo', 'foo/bar' and 'foo/bar::Baz',
+    but not of 'foo/blorp'
+    """
+    base_parts = _splitnode(baseid)
+    node_parts = _splitnode(nodeid)
+    if len(node_parts) < len(base_parts):
+        return False
+    return node_parts[:len(base_parts)] == base_parts
@@ -182,9 +182,9 @@ class PytestArg:
         return hookrecorder


-def get_public_names(l):
-    """Only return names from iterator l without a leading underscore."""
-    return [x for x in l if x[0] != "_"]
+def get_public_names(values):
+    """Only return names from iterator values without a leading underscore."""
+    return [x for x in values if x[0] != "_"]


 class ParsedCall:

@@ -258,9 +258,9 @@ class HookRecorder:
         pytest.fail("\n".join(lines))

     def getcall(self, name):
-        l = self.getcalls(name)
-        assert len(l) == 1, (name, l)
-        return l[0]
+        values = self.getcalls(name)
+        assert len(values) == 1, (name, values)
+        return values[0]

     # functionality for test reports

@@ -271,7 +271,7 @@ class HookRecorder:
     def matchreport(self, inamepart="",
                     names="pytest_runtest_logreport pytest_collectreport", when=None):
         """ return a testreport whose dotted import path matches """
-        l = []
+        values = []
         for rep in self.getreports(names=names):
             try:
                 if not when and rep.when != "call" and rep.passed:

@@ -282,14 +282,14 @@ class HookRecorder:
                 if when and getattr(rep, 'when', None) != when:
                     continue
                 if not inamepart or inamepart in rep.nodeid.split("::"):
-                    l.append(rep)
-        if not l:
+                    values.append(rep)
+        if not values:
             raise ValueError("could not find test report matching %r: "
                              "no test reports at all!" % (inamepart,))
-        if len(l) > 1:
+        if len(values) > 1:
             raise ValueError(
-                "found 2 or more testreports matching %r: %s" % (inamepart, l))
-        return l[0]
+                "found 2 or more testreports matching %r: %s" % (inamepart, values))
+        return values[0]

     def getfailures(self,
                     names='pytest_runtest_logreport pytest_collectreport'):

@@ -673,8 +673,8 @@ class Testdir:

         """
         p = self.makepyfile(source)
-        l = list(cmdlineargs) + [p]
-        return self.inline_run(*l)
+        values = list(cmdlineargs) + [p]
+        return self.inline_run(*values)

     def inline_genitems(self, *args):
         """Run ``pytest.main(['--collectonly'])`` in-process.
@@ -321,7 +321,7 @@ class PyCollector(PyobjMixin, main.Collector):
         for basecls in inspect.getmro(self.obj.__class__):
             dicts.append(basecls.__dict__)
         seen = {}
-        l = []
+        values = []
         for dic in dicts:
             for name, obj in list(dic.items()):
                 if name in seen:

@@ -332,9 +332,9 @@ class PyCollector(PyobjMixin, main.Collector):
                     continue
                 if not isinstance(res, list):
                     res = [res]
-                l.extend(res)
-        l.sort(key=lambda item: item.reportinfo()[:2])
-        return l
+                values.extend(res)
+        values.sort(key=lambda item: item.reportinfo()[:2])
+        return values

     def makeitem(self, name, obj):
         # assert self.ihook.fspath == self.fspath, self

@@ -592,7 +592,7 @@ class Generator(FunctionMixin, PyCollector):
         self.session._setupstate.prepare(self)
         # see FunctionMixin.setup and test_setupstate_is_preserved_134
         self._preservedparent = self.parent.obj
-        l = []
+        values = []
         seen = {}
         for i, x in enumerate(self.obj()):
             name, call, args = self.getcallargs(x)

@@ -605,9 +605,9 @@ class Generator(FunctionMixin, PyCollector):
             if name in seen:
                 raise ValueError("%r generated tests with non-unique name %r" % (self, name))
             seen[name] = True
-            l.append(self.Function(name, self, args=args, callobj=call))
+            values.append(self.Function(name, self, args=args, callobj=call))
         self.warn('C1', deprecated.YIELD_TESTS)
-        return l
+        return values

     def getcallargs(self, obj):
         if not isinstance(obj, (tuple, list)):

@@ -979,50 +979,48 @@ def _show_fixtures_per_test(config, session):
     tw = _pytest.config.create_terminal_writer(config)
     verbose = config.getvalue("verbose")

-    def get_best_rel(func):
+    def get_best_relpath(func):
         loc = getlocation(func, curdir)
         return curdir.bestrelpath(loc)

     def write_fixture(fixture_def):
         argname = fixture_def.argname
         if verbose <= 0 and argname.startswith("_"):
             return
         if verbose > 0:
-            bestrel = get_best_rel(fixture_def.func)
+            bestrel = get_best_relpath(fixture_def.func)
             funcargspec = "{0} -- {1}".format(argname, bestrel)
         else:
             funcargspec = argname
         tw.line(funcargspec, green=True)
         fixture_doc = fixture_def.func.__doc__
         if fixture_doc:
             write_docstring(tw, fixture_doc)
         else:
             tw.line('    no docstring available', red=True)

     def write_item(item):
-        name2fixturedefs = item._fixtureinfo.name2fixturedefs
-
-        if not name2fixturedefs:
-            # The given test item does not use any fixtures
+        try:
+            info = item._fixtureinfo
+        except AttributeError:
+            # doctests items have no _fixtureinfo attribute
+            return
+        if not info.name2fixturedefs:
+            # this test item does not use any fixtures
             return
-        bestrel = get_best_rel(item.function)
-
         tw.line()
         tw.sep('-', 'fixtures used by {0}'.format(item.name))
-        tw.sep('-', '({0})'.format(bestrel))
-        for argname, fixture_defs in sorted(name2fixturedefs.items()):
-            assert fixture_defs is not None
-            if not fixture_defs:
+        tw.sep('-', '({0})'.format(get_best_relpath(item.function)))
+        # dict key not used in loop but needed for sorting
+        for _, fixturedefs in sorted(info.name2fixturedefs.items()):
+            assert fixturedefs is not None
+            if not fixturedefs:
                 continue
-            # The last fixture def item in the list is expected
-            # to be the one used by the test item
-            write_fixture(fixture_defs[-1])
+            # last item is expected to be the one used by the test item
+            write_fixture(fixturedefs[-1])

-    for item in session.items:
-        write_item(item)
+    for session_item in session.items:
+        write_item(session_item)


 def showfixtures(config):
@@ -84,7 +84,7 @@ class ApproxNumpy(ApproxBase):

         try:
             actual = np.asarray(actual)
-        except:
+        except:  # noqa
             raise TypeError("cannot compare '{0}' to numpy.ndarray".format(actual))

         if actual.shape != self.expected.shape:

@@ -217,7 +217,8 @@ class ApproxScalar(ApproxBase):
         absolute tolerance or a relative tolerance, depending on what the user
         specified or which would be larger.
         """
-        def set_default(x, default): return x if x is not None else default
+        def set_default(x, default):
+            return x if x is not None else default

         # Figure out what the absolute tolerance should be. ``self.abs`` is
         # either None or a value specified by the user.

@@ -493,7 +494,8 @@ def raises(expected_exception, *args, **kwargs):
             ...
         >>> assert exc_info.type == ValueError

-    Or you can use the keyword argument ``match`` to assert that the
+    Since version ``3.1`` you can use the keyword argument ``match`` to assert that the
     exception matches a text or regex::

         >>> with raises(ValueError, match='must be 0 or None'):

@@ -502,7 +504,12 @@ def raises(expected_exception, *args, **kwargs):
         >>> with raises(ValueError, match=r'must be \d+$'):
         ...     raise ValueError("value must be 42")

-    Or you can specify a callable by passing a to-be-called lambda::
+    **Legacy forms**
+
+    The forms below are fully supported but are discouraged for new code because the
+    context manager form is regarded as more readable and less error-prone.
+
+    It is possible to specify a callable by passing a to-be-called lambda::

         >>> raises(ZeroDivisionError, lambda: 1/0)
         <ExceptionInfo ...>

@@ -516,11 +523,14 @@ def raises(expected_exception, *args, **kwargs):
         >>> raises(ZeroDivisionError, f, x=0)
         <ExceptionInfo ...>

-    A third possibility is to use a string to be executed::
+    It is also possible to pass a string to be evaluated at runtime::

         >>> raises(ZeroDivisionError, "f(0)")
         <ExceptionInfo ...>

+    The string will be evaluated using the same ``locals()`` and ``globals()``
+    at the moment of the ``raises`` call.
+
     .. autoclass:: _pytest._code.ExceptionInfo
        :members:
@@ -56,11 +56,6 @@ def pytest_sessionfinish(session):
     session._setupstate.teardown_all()


-class NodeInfo:
-    def __init__(self, location):
-        self.location = location
-
-
 def pytest_runtest_protocol(item, nextitem):
     item.ihook.pytest_runtest_logstart(
         nodeid=item.nodeid, location=item.location,

@@ -197,7 +192,7 @@ class CallInfo:
         except KeyboardInterrupt:
             self.stop = time()
             raise
-        except:
+        except:  # noqa
             self.excinfo = ExceptionInfo()
             self.stop = time()

@@ -346,10 +346,10 @@ def folded_skips(skipped):
         key = event.longrepr
         assert len(key) == 3, (event, key)
         d.setdefault(key, []).append(event)
-    l = []
+    values = []
     for key, events in d.items():
-        l.append((len(events),) + key)
-    return l
+        values.append((len(events),) + key)
+    return values


 def show_skipped(terminalreporter, lines):
@@ -13,6 +13,7 @@ import sys
 import time
 import platform

+from _pytest import nodes
 import _pytest._pluggy as pluggy


@@ -444,15 +445,15 @@ class TerminalReporter:
             line = self.config.cwd_relative_nodeid(nodeid)
             if domain and line.endswith(domain):
                 line = line[:-len(domain)]
-                l = domain.split("[")
-                l[0] = l[0].replace('.', '::')  # don't replace '.' in params
-                line += "[".join(l)
+                values = domain.split("[")
+                values[0] = values[0].replace('.', '::')  # don't replace '.' in params
+                line += "[".join(values)
             return line
+        # collect_fspath comes from testid which has a "/"-normalized path
         if fspath:
             res = mkrel(nodeid).replace("::()", "")  # parens-normalization
-            if nodeid.split("::")[0] != fspath.replace("\\", "/"):
+            if nodeid.split("::")[0] != fspath.replace("\\", nodes.SEP):
                 res += " <- " + self.startdir.bestrelpath(fspath)
         else:
             res = "[location]"

@@ -478,11 +479,11 @@ class TerminalReporter:
     # summaries for sessionfinish
     #
     def getreports(self, name):
-        l = []
+        values = []
         for x in self.stats.get(name, []):
             if not hasattr(x, '_pdbshown'):
-                l.append(x)
-        return l
+                values.append(x)
+        return values

     def summary_warnings(self):
         if self.hasopt("w"):

@@ -593,8 +594,8 @@ def repr_pythonversion(v=None):
     return str(v)


-def flatten(l):
-    for x in l:
+def flatten(values):
+    for x in values:
         if isinstance(x, (list, tuple)):
             for y in flatten(x):
                 yield y

@@ -635,7 +636,7 @@ def build_summary_stats_line(stats):


 def _plugin_nameversions(plugininfo):
-    l = []
+    values = []
     for plugin, dist in plugininfo:
         # gets us name and version!
         name = '{dist.project_name}-{dist.version}'.format(dist=dist)

@@ -644,6 +645,6 @@ def _plugin_nameversions(plugininfo):
         name = name[7:]
         # we decided to print python package names
         # they can have more than one plugin
-        if name not in l:
-            l.append(name)
-    return l
+        if name not in values:
+            values.append(name)
+    return values
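The renamed ``flatten`` generator can be exercised on its own. A sketch, assuming the ``else: yield x`` leaf case that falls below the lines shown in the hunk:

```python
def flatten(values):
    """Recursively yield the leaves of arbitrarily nested lists/tuples."""
    for x in values:
        if isinstance(x, (list, tuple)):
            for y in flatten(x):
                yield y
        else:
            # leaf case: assumed, as it lies outside the hunk shown above
            yield x


print(list(flatten([1, [2, (3, [4])], 5])))  # → [1, 2, 3, 4, 5]
```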
@@ -109,13 +109,13 @@ class TestCaseFunction(Function):
         except TypeError:
             try:
                 try:
-                    l = traceback.format_exception(*rawexcinfo)
-                    l.insert(0, "NOTE: Incompatible Exception Representation, "
-                             "displaying natively:\n\n")
-                    fail("".join(l), pytrace=False)
+                    values = traceback.format_exception(*rawexcinfo)
+                    values.insert(0, "NOTE: Incompatible Exception Representation, "
+                                  "displaying natively:\n\n")
+                    fail("".join(values), pytrace=False)
                 except (fail.Exception, KeyboardInterrupt):
                     raise
-                except:
+                except:  # noqa
                     fail("ERROR: Unknown Incompatible Exception "
                          "representation:\n%r" % (rawexcinfo,), pytrace=False)
         except KeyboardInterrupt:

appveyor.yml
@@ -21,13 +21,13 @@ environment:
   - TOXENV: "py27-xdist"
   - TOXENV: "py27-trial"
   - TOXENV: "py27-numpy"
-  - TOXENV: "py35-pexpect"
-  - TOXENV: "py35-xdist"
-  - TOXENV: "py35-trial"
-  - TOXENV: "py35-numpy"
+  - TOXENV: "py36-pexpect"
+  - TOXENV: "py36-xdist"
+  - TOXENV: "py36-trial"
+  - TOXENV: "py36-numpy"
   - TOXENV: "py27-nobyte"
   - TOXENV: "doctesting"
-  - TOXENV: "freeze"
+  - TOXENV: "py35-freeze"
   - TOXENV: "docs"

 install:
@@ -36,7 +36,7 @@ install:

   - if "%TOXENV%" == "pypy" call scripts\install-pypy.bat

-  - C:\Python35\python -m pip install tox
+  - C:\Python36\python -m pip install --upgrade --pre tox

 build: false  # Not a C# project, build stuff at the test step instead.
@@ -6,6 +6,10 @@ Release announcements
    :maxdepth: 2


+   release-3.2.5
+   release-3.2.4
+   release-3.2.3
+   release-3.2.2
    release-3.2.1
    release-3.2.0
    release-3.1.3

doc/en/announce/release-3.2.2.rst (new file, 28 lines)
@@ -0,0 +1,28 @@
+pytest-3.2.2
+=======================================
+
+pytest 3.2.2 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Andreas Pelme
+* Antonio Hidalgo
+* Bruno Oliveira
+* Felipe Dau
+* Fernando Macedo
+* Jesús Espino
+* Joan Massich
+* Joe Talbott
+* Kirill Pinchuk
+* Ronny Pfannschmidt
+* Xuan Luong
+
+
+Happy testing,
+The pytest Development Team

doc/en/announce/release-3.2.3.rst (new file, 23 lines)
@@ -0,0 +1,23 @@
+pytest-3.2.3
+=======================================
+
+pytest 3.2.3 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Bruno Oliveira
+* Evan
+* Joe Hamman
+* Oliver Bestwalter
+* Ronny Pfannschmidt
+* Xuan Luong
+
+
+Happy testing,
+The pytest Development Team

doc/en/announce/release-3.2.4.rst (new file, 36 lines)
@@ -0,0 +1,36 @@
+pytest-3.2.4
+=======================================
+
+pytest 3.2.4 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Bruno Oliveira
+* Christian Boelsen
+* Christoph Buchner
+* Daw-Ran Liou
+* Florian Bruhin
+* Franck Michea
+* Leonard Lausen
+* Matty G
+* Owen Tuz
+* Pavel Karateev
+* Pierre GIRAUD
+* Ronny Pfannschmidt
+* Stephen Finucane
+* Sviatoslav Abakumov
+* Thomas Hisch
+* Tom Dalton
+* Xuan Luong
+* Yorgos Pagles
+* Семён Марьясин
+
+
+Happy testing,
+The pytest Development Team

doc/en/announce/release-3.2.5.rst (new file, 18 lines)
@@ -0,0 +1,18 @@
+pytest-3.2.5
+=======================================
+
+pytest 3.2.5 has just been released to PyPI.
+
+This is a bug-fix release, being a drop-in replacement. To upgrade::
+
+  pip install --upgrade pytest
+
+The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
+
+Thanks to all who contributed to this release, among them:
+
+* Bruno Oliveira
+
+
+Happy testing,
+The pytest Development Team
@@ -119,9 +119,9 @@ exceptions your own code is deliberately raising, whereas using
 like documenting unfixed bugs (where the test describes what "should" happen)
 or bugs in dependencies.

-If you want to test that a regular expression matches on the string
-representation of an exception (like the ``TestCase.assertRaisesRegexp`` method
-from ``unittest``) you can use the ``ExceptionInfo.match`` method::
+Also, the context manager form accepts a ``match`` keyword parameter to test
+that a regular expression matches on the string representation of an exception
+(like the ``TestCase.assertRaisesRegexp`` method from ``unittest``)::

     import pytest

@@ -129,12 +129,11 @@ from ``unittest``) you can use the ``ExceptionInfo.match`` method::
         raise ValueError("Exception 123 raised")

     def test_match():
-        with pytest.raises(ValueError) as excinfo:
+        with pytest.raises(ValueError, match=r'.* 123 .*'):
             myfunc()
-        excinfo.match(r'.* 123 .*')

 The regexp parameter of the ``match`` method is matched with the ``re.search``
-function. So in the above example ``excinfo.match('123')`` would have worked as
+function. So in the above example ``match='123'`` would have worked as
 well.

@@ -210,8 +209,8 @@ the ``pytest_assertrepr_compare`` hook.
 .. autofunction:: _pytest.hookspec.pytest_assertrepr_compare
    :noindex:

-As an example consider adding the following hook in a conftest.py which
-provides an alternative explanation for ``Foo`` objects::
+As an example consider adding the following hook in a :ref:`conftest.py <conftest.py>`
+file which provides an alternative explanation for ``Foo`` objects::

     # content of conftest.py
     from test_foocompare import Foo
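As the assertion docs above note, the ``match`` argument is applied with ``re.search``, so a pattern may hit anywhere in the message. A stdlib-only sketch of the difference, with no pytest required (``msg`` is a hypothetical exception message):

```python
import re

msg = "Exception 123 raised"

# re.search matches anywhere in the string, so these both succeed
assert re.search(r'.* 123 .*', msg)
assert re.search('123', msg)  # equivalent to passing match='123'

# re.fullmatch would have to span the whole message, so it fails here
assert re.fullmatch('123', msg) is None
print("search hits a substring; fullmatch does not")
```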
@@ -41,6 +41,7 @@ Full pytest documentation
    historical-notes
    license
    contributing
+   development_guide
    talks
    projects
    faq

@@ -230,13 +230,16 @@ Builtin configuration file options
 .. confval:: python_files

     One or more Glob-style file patterns determining which python files
-    are considered as test modules.
+    are considered as test modules. By default, pytest will consider
+    any file matching with ``test_*.py`` and ``*_test.py`` globs as a test
+    module.

 .. confval:: python_classes

     One or more name prefixes or glob-style patterns determining which classes
-    are considered for test collection. Here is an example of how to collect
-    tests from classes that end in ``Suite``:
+    are considered for test collection. By default, pytest will consider any
+    class prefixed with ``Test`` as a test collection. Here is an example of how
+    to collect tests from classes that end in ``Suite``:

     .. code-block:: ini

@@ -251,7 +254,8 @@ Builtin configuration file options
 .. confval:: python_functions

     One or more name prefixes or glob-patterns determining which test functions
-    and methods are considered tests. Here is an example of how
+    and methods are considered tests. By default, pytest will consider any
+    function prefixed with ``test`` as a test. Here is an example of how
     to collect test functions and methods that end in ``_test``:

     .. code-block:: ini
doc/en/development_guide.rst (new file, 108 lines)
@@ -0,0 +1,108 @@
+=================
+Development Guide
+=================
+
+Some general guidelines regarding development in pytest for core maintainers and general contributors. Nothing here
+is set in stone; feel free to suggest improvements or changes in the workflow.
+
+
+Code Style
+----------
+
+* `PEP-8 <https://www.python.org/dev/peps/pep-0008>`_
+* `flake8 <https://pypi.python.org/pypi/flake8>`_ for quality checks
+* `invoke <http://www.pyinvoke.org/>`_ to automate development tasks
+
+
+Branches
+--------
+
+We have two long term branches:
+
+* ``master``: contains the code for the next bugfix release.
+* ``features``: contains the code with new features for the next minor release.
+
+The official repository usually does not contain topic branches; developers and contributors should create topic
+branches in their own forks.
+
+Exceptions can be made for cases where more than one contributor is working on the same
+topic or where it makes sense to use some automatic capability of the main repository, such as automatic docs from
+`readthedocs <readthedocs.org>`_ for a branch dealing with documentation refactoring.
+
+Issues
+------
+
+Any question, feature, bug or proposal is welcome as an issue. Users are encouraged to use them whenever they need.
+
+GitHub issues should use labels to categorize them. Labels should be created sporadically, to fill a niche; we should
+avoid creating labels just for the sake of creating them.
+
+Here is a list of labels and a brief description mentioning their intent.
+
+
+**Type**
+
+* ``type: backward compatibility``: issue that will cause problems with old pytest versions.
+* ``type: bug``: problem that needs to be addressed.
+* ``type: deprecation``: feature that will be deprecated in the future.
+* ``type: docs``: documentation missing or needing clarification.
+* ``type: enhancement``: new feature or API change, should be merged into ``features``.
+* ``type: feature-branch``: new feature or API change, should be merged into ``features``.
+* ``type: infrastructure``: improvement to development/releases/CI structure.
+* ``type: performance``: performance or memory problem/improvement.
+* ``type: proposal``: proposal for a new feature, often to gather opinions or design the API around the new feature.
+* ``type: question``: question regarding usage, installation, internals or how to test something.
+* ``type: refactoring``: internal improvements to the code.
+* ``type: regression``: indicates a problem that was introduced in a release which was working previously.
+
+**Status**
+
+* ``status: critical``: grave problem or usability issue that affects lots of users.
+* ``status: easy``: easy issue that is friendly to new contributors.
+* ``status: help wanted``: core developers need help from experts on this topic.
+* ``status: needs information``: reporter needs to provide more information; can be closed after 2 or more weeks of inactivity.
+
+**Topic**
+
+* ``topic: collection``
+* ``topic: fixtures``
+* ``topic: parametrize``
+* ``topic: reporting``
+* ``topic: selection``
+* ``topic: tracebacks``
+
+**Plugin (internal or external)**
+
+* ``plugin: cache``
+* ``plugin: capture``
+* ``plugin: doctests``
+* ``plugin: junitxml``
+* ``plugin: monkeypatch``
+* ``plugin: nose``
+* ``plugin: pastebin``
+* ``plugin: pytester``
+* ``plugin: tmpdir``
+* ``plugin: unittest``
+* ``plugin: warnings``
+* ``plugin: xdist``
+
+
+**OS**
+
+Issues specific to a single operating system. Do not use as a means to indicate where an issue originated from, only
+for problems that happen **only** in that system.
+
+* ``os: linux``
+* ``os: mac``
+* ``os: windows``
+
+**Temporary**
+
+Used to classify issues for limited time, to help find issues related in events for example.
+They should be removed after they are no longer relevant.
+
+* ``temporary: EP2017 sprint``:
+* ``temporary: sprint-candidate``:
+
+
+.. include:: ../../HOWTORELEASE.rst
@@ -395,6 +395,49 @@ The ``--markers`` option always gives you a list of available markers::
     @pytest.mark.trylast: mark a hook implementation function such that the plugin machinery will try to call it last/as late as possible.


+.. _`passing callables to custom markers`:
+
+Passing a callable to custom markers
+--------------------------------------------
+
+.. regendoc:wipe
+
+Below is the config file that will be used in the next examples::
+
+    # content of conftest.py
+    import sys
+
+    def pytest_runtest_setup(item):
+        marker = item.get_marker('my_marker')
+        if marker is not None:
+            for info in marker:
+                print('Marker info name={} args={} kwars={}'.format(info.name, info.args, info.kwargs))
+                sys.stdout.flush()
+
+A custom marker can have its argument set, i.e. ``args`` and ``kwargs`` properties, defined by either invoking it as a callable or using ``pytest.mark.MARKER_NAME.with_args``. These two methods achieve the same effect most of the time.
+
+However, if there is a callable as the single positional argument with no keyword arguments, using ``pytest.mark.MARKER_NAME(c)`` will not pass ``c`` as a positional argument but decorate ``c`` with the custom marker (see :ref:`MarkDecorator <mark>`). Fortunately, ``pytest.mark.MARKER_NAME.with_args`` comes to the rescue::
+
+    # content of test_custom_marker.py
+    import pytest
+
+    def hello_world(*args, **kwargs):
+        return 'Hello World'
+
+    @pytest.mark.my_marker.with_args(hello_world)
+    def test_with_args():
+        pass
+
+The output is as follows::
+
+    $ pytest -q -s
+    Marker info name=my_marker args=(<function hello_world at 0xdeadbeef>,) kwars={}
+    .
+    1 passed in 0.12 seconds
+
+We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
+
+
 Reading markers which were set from multiple places
 ----------------------------------------------------
@@ -350,7 +350,7 @@ Parametrizing test methods through per-class configuration
|
||||
.. _`unittest parametrizer`: https://github.com/testing-cabal/unittest-ext/blob/master/params.py
|
||||
|
||||
|
||||
Here is an example ``pytest_generate_function`` function implementing a
|
||||
Here is an example ``pytest_generate_tests`` function implementing a
|
||||
parametrization scheme similar to Michael Foord's `unittest
|
||||
parametrizer`_ but in a lot less code::
|
||||
|
||||
@@ -485,4 +485,54 @@ of our ``test_func1`` was skipped. A few notes:

   values as well.


Set marks or test ID for individual parametrized tests
--------------------------------------------------------------------

Use ``pytest.param`` to apply marks or set a test ID to an individual parametrized test.
For example::

    # content of test_pytest_param_example.py
    import pytest
    @pytest.mark.parametrize('test_input,expected', [
        ('3+5', 8),
        pytest.param('1+7', 8,
                     marks=pytest.mark.basic),
        pytest.param('2+4', 6,
                     marks=pytest.mark.basic,
                     id='basic_2+4'),
        pytest.param('6*9', 42,
                     marks=[pytest.mark.basic, pytest.mark.xfail],
                     id='basic_6*9'),
    ])
    def test_eval(test_input, expected):
        assert eval(test_input) == expected

In this example, we have 4 parametrized tests. Except for the first test,
we mark the remaining three parametrized tests with the custom marker ``basic``,
and for the fourth test we also use the built-in mark ``xfail`` to indicate this
test is expected to fail. For explicitness, we set test ids for some tests.
Then run ``pytest`` with verbose mode and with only the ``basic`` marker::

    pytest -v -m basic
    ============================================ test session starts =============================================
    platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
    rootdir: $REGENDOC_TMPDIR, inifile:
    collected 4 items

    test_pytest_param_example.py::test_eval[1+7-8] PASSED
    test_pytest_param_example.py::test_eval[basic_2+4] PASSED
    test_pytest_param_example.py::test_eval[basic_6*9] xfail
    ========================================== short test summary info ===========================================
    XFAIL test_pytest_param_example.py::test_eval[basic_6*9]

    ============================================= 1 tests deselected =============================================

As a result:

- Four tests were collected
- One test was deselected because it doesn't have the ``basic`` mark.
- Three tests with the ``basic`` mark were selected.
- The test ``test_eval[1+7-8]`` passed, but the name is autogenerated and confusing.
- The test ``test_eval[basic_2+4]`` passed.
- The test ``test_eval[basic_6*9]`` was expected to fail and did fail.
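What ``pytest.param`` produces can also be inspected directly: it bundles the values together with the marks and the id (field names as in pytest's ``ParameterSet`` API):

```python
import pytest

# Wrap one parametrize entry, attaching a mark and an explicit id.
p = pytest.param("2+4", 6, marks=pytest.mark.basic, id="basic_2+4")
```

The returned object exposes ``values``, ``marks`` and ``id``, which is exactly what ``@pytest.mark.parametrize`` consumes.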
@@ -175,21 +175,23 @@ You can always peek at the collection tree without running tests like this::

    ======= no tests ran in 0.12 seconds ========

customizing test collection to find all .py files
---------------------------------------------------------
.. _customizing-test-collection:

Customizing test collection
---------------------------

.. regendoc:wipe

You can easily instruct ``pytest`` to discover tests from every python file::

You can easily instruct ``pytest`` to discover tests from every Python file::

    # content of pytest.ini
    [pytest]
    python_files = *.py
However, many projects will have a ``setup.py`` which they don't want to be imported. Moreover, there may be files only importable by a specific python version.
For such cases you can dynamically define files to be ignored by listing
them in a ``conftest.py`` file::
However, many projects will have a ``setup.py`` which they don't want to be
imported. Moreover, there may be files only importable by a specific python
version. For such cases you can dynamically define files to be ignored by
listing them in a ``conftest.py`` file::

    # content of conftest.py
    import sys

@@ -198,7 +200,7 @@ them in a ``conftest.py`` file::

    if sys.version_info[0] > 2:
        collect_ignore.append("pkg/module_py2.py")

And then if you have a module file like this::
and then if you have a module file like this::

    # content of pkg/module_py2.py
    def test_only_on_python2():

@@ -207,13 +209,13 @@ And then if you have a module file like this::

        except Exception, e:
            pass

and a setup.py dummy file like this::
and a ``setup.py`` dummy file like this::

    # content of setup.py
    0/0  # will raise exception if imported

then a pytest run on Python2 will find the one test and will leave out the
setup.py file::
If you run with a Python 2 interpreter then you will find the one test and will
leave out the ``setup.py`` file::

    #$ pytest --collect-only
    ====== test session starts ======

@@ -225,8 +227,8 @@ setup.py file::

    ====== no tests ran in 0.04 seconds ======

If you run with a Python3 interpreter both the one test and the setup.py file
will be left out::
If you run with a Python 3 interpreter both the one test and the ``setup.py``
file will be left out::

    $ pytest --collect-only
    ======= test session starts ========
@@ -358,7 +358,7 @@ get on the terminal - we are working on that)::

    >       int(s)
    E       ValueError: invalid literal for int() with base 10: 'qwe'

    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:570>:1: ValueError
    <0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:580>:1: ValueError
    _______ TestRaises.test_raises_doesnt ________

    self = <failure_demo.TestRaises object at 0xdeadbeef>
@@ -127,7 +127,7 @@ Control skipping of tests according to command line option

.. regendoc:wipe

Here is a ``conftest.py`` file adding a ``--runslow`` command
line option to control skipping of ``slow`` marked tests:
line option to control skipping of ``pytest.mark.slow`` marked tests:

.. code-block:: python

@@ -136,7 +136,16 @@ line option to control skipping of ``slow`` marked tests:

    import pytest

    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
                         help="run slow tests")
                         default=False, help="run slow tests")

    def pytest_collection_modifyitems(config, items):
        if config.getoption("--runslow"):
            # --runslow given in cli: do not skip slow tests
            return
        skip_slow = pytest.mark.skip(reason="need --runslow option to run")
        for item in items:
            if "slow" in item.keywords:
                item.add_marker(skip_slow)

We can now write a test module like this:
@@ -146,17 +155,11 @@ We can now write a test module like this:

    import pytest


    slow = pytest.mark.skipif(
        not pytest.config.getoption("--runslow"),
        reason="need --runslow option to run"
    )


    def test_func_fast():
        pass


    @slow
    @pytest.mark.slow
    def test_func_slow():
        pass

@@ -170,7 +173,7 @@ and when running it will see a skipped "slow" test::

    test_module.py .s
    ======= short test summary info ========
    SKIP [1] test_module.py:14: need --runslow option to run
    SKIP [1] test_module.py:8: need --runslow option to run

    ======= 1 passed, 1 skipped in 0.12 seconds ========
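The ``pytest_collection_modifyitems`` hook above decides purely from the ``--runslow`` flag and each item's keywords; that decision can be sketched as a plain function and checked without running pytest (the names here are illustrative, not pytest API):

```python
# Standalone sketch of the skip decision made by the collection hook.
def items_to_skip(item_keywords, runslow):
    """Given {item_name: keyword_set} and the --runslow flag,
    return the names that would receive the skip marker."""
    if runslow:
        return []  # --runslow given on the command line: skip nothing
    return [name for name, kw in item_keywords.items() if "slow" in kw]

# Two items mirroring the example module above.
items = {"test_func_fast": set(), "test_func_slow": {"slow"}}
```

With ``runslow=False`` only ``test_func_slow`` is marked for skipping; with ``runslow=True`` nothing is.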
@@ -27,7 +27,7 @@ functions:

* fixture management scales from simple unit to complex
  functional testing, allowing you to parametrize fixtures and tests according
  to configuration and component options, or to re-use fixtures
  across class, module or whole test session scopes.
  across function, class, module or whole test session scopes.

In addition, pytest continues to support :ref:`xunitsetup`. You can mix
both styles, moving incrementally from classic to new style, as you

@@ -127,10 +127,39 @@ It's a prime example of `dependency injection`_ where fixture

functions take the role of the *injector* and test functions are the
*consumers* of fixture objects.

.. _`conftest.py`:
.. _`conftest`:
``conftest.py``: sharing fixture functions
------------------------------------------

If, while implementing your tests, you realize that you
want to use a fixture function from multiple test files, you can move it
to a ``conftest.py`` file.
You don't need to import the fixture you want to use in a test; it
automatically gets discovered by pytest. The discovery of
fixture functions starts at test classes, then test modules, then
``conftest.py`` files and finally builtin and third party plugins.

You can also use the ``conftest.py`` file to implement
:ref:`local per-directory plugins <conftest.py plugins>`.

Sharing test data
-----------------

If you want to make test data from files available to your tests, a good way
to do this is by loading these data in a fixture for use by your tests.
This makes use of the automatic caching mechanisms of pytest.

Another good approach is by adding the data files in the ``tests`` folder.
There are also community plugins available to help manage this aspect of
testing, e.g. `pytest-datadir <https://github.com/gabrielcnr/pytest-datadir>`__
and `pytest-datafiles <https://pypi.python.org/pypi/pytest-datafiles>`__.
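As a minimal sketch of the file-loading approach (the file name, fixture name, and contents are invented for illustration; here the data file is created inline so the example is self-contained):

```python
# content of conftest.py (sketch) -- expose JSON test data via a fixture
import json
import os
import tempfile

import pytest

# In a real project this file would live alongside the tests;
# for a self-contained sketch we create it ourselves.
DATA_FILE = os.path.join(tempfile.mkdtemp(), "expected_output.json")
with open(DATA_FILE, "w") as f:
    json.dump({"answer": 42}, f)

@pytest.fixture
def expected_output():
    # Loaded once per test that requests the fixture.
    with open(DATA_FILE) as f:
        return json.load(f)
```

Tests then simply take ``expected_output`` as an argument and receive the parsed data.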
.. _smtpshared:

Sharing a fixture across tests in a module (or class/session)
-----------------------------------------------------------------
Scope: sharing a fixture instance across tests in a class, module or session
----------------------------------------------------------------------------

.. regendoc:wipe

@@ -139,10 +168,12 @@ usually time-expensive to create. Extending the previous example, we

can add a ``scope='module'`` parameter to the
:py:func:`@pytest.fixture <_pytest.python.fixture>` invocation
to cause the decorated ``smtp`` fixture function to only be invoked once
per test module. Multiple test functions in a test module will thus
each receive the same ``smtp`` fixture instance. The next example puts
the fixture function into a separate ``conftest.py`` file so
that tests from multiple test modules in the directory can
per test *module* (the default is to invoke once per test *function*).
Multiple test functions in a test module will thus
each receive the same ``smtp`` fixture instance, thus saving time.

The next example puts the fixture function into a separate ``conftest.py`` file
so that tests from multiple test modules in the directory can
access the fixture function::

    # content of conftest.py
@@ -223,6 +254,8 @@ instance, you can simply declare it:

        # the returned fixture value will be shared for
        # all tests needing it

Finally, the ``class`` scope will invoke the fixture once per test *class*.

.. _`finalization`:

Fixture finalization / executing teardown code
@@ -858,7 +891,7 @@ into a conftest.py file **without** using ``autouse``::

    # content of conftest.py
    @pytest.fixture
    def transact(self, request, db):
    def transact(request, db):
        db.begin()
        yield
        db.rollback()

@@ -874,17 +907,6 @@ All test methods in this TestClass will use the transaction fixture while

other test classes or functions in the module will not use it unless
they also add a ``transact`` reference.
Shifting (visibility of) fixture functions
----------------------------------------------------

If, while implementing your tests, you realize that you
want to use a fixture function from multiple test files, you can move it
to a :ref:`conftest.py <conftest.py>` file or even separately installable
:ref:`plugins <plugins>` without changing test code. The discovery of
fixture functions starts at test classes, then test modules, then
``conftest.py`` files and finally builtin and third party plugins.

Overriding fixtures on various levels
-------------------------------------
@@ -1,7 +1,7 @@

Installation and Getting Started
===================================

**Pythons**: Python 2.6,2.7,3.3,3.4,3.5, Jython, PyPy-2.3
**Pythons**: Python 2.6,2.7,3.3,3.4,3.5,3.6, Jython, PyPy-2.3

**Platforms**: Unix/Posix and Windows

@@ -122,7 +122,7 @@ want to distribute them along with your application::

    test_view.py
    ...

In this scheme, it is easy to your run tests using the ``--pyargs`` option::
In this scheme, it is easy to run your tests using the ``--pyargs`` option::

    pytest --pyargs mypkg

@@ -267,7 +267,7 @@ your own setuptools Test command for invoking pytest.

    def initialize_options(self):
        TestCommand.initialize_options(self)
        self.pytest_args = []
        self.pytest_args = ''

    def run_tests(self):
        import shlex
@@ -59,7 +59,7 @@ Features

- Python2.6+, Python3.3+, PyPy-2.3, Jython-2.5 (untested);

- Rich plugin architecture, with over 150+ :ref:`external plugins <extplugins>` and thriving community;
- Rich plugin architecture, with over 315+ `external plugins <http://plugincompat.herokuapp.com>`_ and thriving community;


Documentation

@@ -198,6 +198,12 @@ list::

    SKIP [1] test_strings.py:2: got empty parameter set ['stringinput'], function test_valid_string at $REGENDOC_TMPDIR/test_strings.py:1
    1 skipped in 0.12 seconds

Note that when calling ``metafunc.parametrize`` multiple times with different parameter sets, parameter names across
those sets must not be duplicated, otherwise an error will be raised.

More examples
-------------

For further examples, you might want to look at :ref:`more
parametrization examples <paramexamples>`.
@@ -94,7 +94,7 @@ environment you can type::

and will get an extended test header which shows activated plugins
and their names. It will also print local plugins aka
:ref:`conftest.py <conftest>` files when they are loaded.
:ref:`conftest.py <conftest.py plugins>` files when they are loaded.

.. _`cmdunregister`:

@@ -155,4 +155,3 @@ in the `pytest repository <https://github.com/pytest-dev/pytest>`_.

    _pytest.terminal
    _pytest.tmpdir
    _pytest.unittest
@@ -1,8 +1,13 @@

:orphan:

=========================
Parametrize with fixtures
=========================
===================================
PROPOSAL: Parametrize with fixtures
===================================

.. warning::

    This document outlines a proposal around using fixtures as input
    of parametrized tests or fixtures.

Problem
-------

@@ -108,8 +113,13 @@ the following values.

Alternative approach
--------------------

A new helper function named ``fixture_request`` tells pytest to yield all
parameters of a fixture.
A new helper function named ``fixture_request`` would tell pytest to yield
all parameters marked as a fixture.

.. note::

    The `pytest-lazy-fixture <https://pypi.python.org/pypi/pytest-lazy-fixture>`_ plugin implements a very
    similar solution to the proposal below; make sure to check it out.

.. code-block:: python
@@ -68,4 +68,9 @@ imported in the global import namespace.

This is also discussed in detail in :ref:`test discovery`.

Invoking ``pytest`` versus ``python -m pytest``
-----------------------------------------------

Running pytest with ``python -m pytest [...]`` instead of ``pytest [...]`` yields nearly
equivalent behaviour, except that the former call will add the current directory to ``sys.path``.
See also :ref:`cmdline`.
@@ -3,7 +3,7 @@

.. _skipping:

Skip and xfail: dealing with tests that cannot succeed
=====================================================================
======================================================

You can mark test functions that cannot be run on certain platforms
or that you expect to fail so pytest can deal with them accordingly and

@@ -16,13 +16,17 @@ resource which is not available at the moment (for example a database).

An **xfail** means that you expect a test to fail for some reason.
A common example is a test for a feature not yet implemented, or a bug not yet fixed.
When a test passes despite being expected to fail (marked with ``pytest.mark.xfail``),
it's an **xpass** and will be reported in the test summary.

``pytest`` counts and lists *skip* and *xfail* tests separately. Detailed
information about skipped/xfailed tests is not shown by default to avoid
cluttering the output. You can use the ``-r`` option to see details
corresponding to the "short" letters shown in the test progress::

    pytest -rxs  # show extra info on skips and xfails
    pytest -rxXs  # show extra info on xfailed, xpassed, and skipped tests

More details on the ``-r`` option can be found by running ``pytest -h``.

(See :ref:`how to change command line options defaults`)
@@ -54,7 +58,7 @@ by calling the ``pytest.skip(reason)`` function:

    if not valid_config():
        pytest.skip("unsupported configuration")

The imperative method is useful when it is not possible to evaluate the skip condition
during import time.

``skipif``

@@ -73,7 +77,7 @@ when run on a Python3.3 interpreter::

    ...

If the condition evaluates to ``True`` during collection, the test function will be skipped,
with the specified reason appearing in the summary when using ``-rs``.

You can share ``skipif`` markers between modules. Consider this test module::
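The module body is elided in this hunk; as a sketch of the shared-marker pattern it refers to (the marker name and version check are illustrative):

```python
# test_mymodule.py (sketch): define the skipif marker once, reuse it on tests
import sys

import pytest

minversion = pytest.mark.skipif(sys.version_info < (3, 3),
                                reason="at least python3.3 required")

@minversion
def test_function():
    pass
```

Other modules can then import ``minversion`` and apply the same condition without repeating it.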
@@ -118,6 +122,12 @@ You can use the ``skipif`` marker (as any other marker) on classes::

If the condition is ``True``, this marker will produce a skip result for
each of the test methods of that class.

.. warning::

   The use of ``skipif`` on classes that use inheritance is strongly
   discouraged. `A known bug <https://github.com/pytest-dev/pytest/issues/568>`_
   in pytest's markers may cause unexpected behavior in super classes.

If you want to skip all test functions of a module, you may use
the ``pytestmark`` name on the global level:

@@ -132,6 +142,16 @@ will be skipped if any of the skip conditions is true.

.. _`whole class- or module level`: mark.html#scoped-marking

Skipping files or directories
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sometimes you may need to skip an entire file or directory, for example if the
tests rely on Python version-specific features or contain code that you do not
wish pytest to run. In this case, you must exclude the files and directories
from collection. Refer to :ref:`customizing-test-collection` for more
information.

Skipping on a missing import dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
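The standard tool for this case is ``pytest.importorskip``: it returns the imported module when the import succeeds and otherwise skips the calling test or module. A sketch (a stdlib module is used here so the example actually imports; real usage targets optional third-party dependencies):

```python
import pytest

# Returns the module, or skips the requesting test/module if it is missing.
json = pytest.importorskip("json")

def test_roundtrip():
    assert json.loads(json.dumps({"a": 1})) == {"a": 1}
```

An optional ``minversion`` argument additionally skips when the module's ``__version__`` is too old.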
@@ -346,5 +366,3 @@ test instances when using parametrize:

    ])
    def test_increment(n, expected):
        assert n + 1 == expected
@@ -233,3 +233,13 @@ was executed ahead of the ``test_method``.

    overwrite ``unittest.TestCase`` ``__call__`` or ``run``, they need
    to overwrite ``debug`` in the same way (this is also true for standard
    unittest).

.. note::

    Due to architectural differences between the two frameworks, setup and
    teardown for ``unittest``-based tests is performed during the ``call`` phase
    of testing instead of in ``pytest``'s standard ``setup`` and ``teardown``
    stages. This can be important to understand in some situations, particularly
    when reasoning about errors. For example, if a ``unittest``-based suite
    exhibits errors during setup, ``pytest`` will report no errors during its
    ``setup`` phase and will instead raise the error during ``call``.
@@ -17,7 +17,7 @@ You can invoke testing through the Python interpreter from the command line::

    python -m pytest [...]

This is almost equivalent to invoking the command line script ``pytest [...]``
directly, except that Python will also add the current directory to ``sys.path``.
directly, except that calling via ``python`` will also add the current directory to ``sys.path``.

Possible exit codes
--------------------------------------------------------------

@@ -311,6 +311,13 @@ Creating resultlog format files

This option is rarely used and is scheduled for removal in 4.0.

An alternative for users who still need similar functionality is to use the
`pytest-tap <https://pypi.python.org/pypi/pytest-tap>`_ plugin which provides
a stream of test data.

If you have any concerns, please don't hesitate to
`open an issue <https://github.com/pytest-dev/pytest/issues>`_.

To create plain-text machine-readable result files you can issue::

    pytest --resultlog=path
@@ -109,7 +109,7 @@ decorator or to all tests in a module by setting the ``pytestmark`` variable:

.. code-block:: python

    # turns all warnings into errors for this module
    pytestmark = @pytest.mark.filterwarnings('error')
    pytestmark = pytest.mark.filterwarnings('error')

.. note::
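The ``'error'`` filter string follows the same warning-filter syntax as Python's ``-W`` option and the stdlib ``warnings`` module; standalone, that filter behaves like this:

```python
import warnings

# 'error' turns any matching warning into a raised exception.
with warnings.catch_warnings():
    warnings.simplefilter("error")
    try:
        warnings.warn("deprecated", DeprecationWarning)
        raised = False
    except DeprecationWarning:
        raised = True
```

With the mark applied, a test that triggers such a warning fails instead of passing silently.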
@@ -57,9 +57,7 @@ Plugin discovery order at tool startup

.. _`pytest/plugin`: http://bitbucket.org/pytest-dev/pytest/src/tip/pytest/plugin/
.. _`conftest.py plugins`:
.. _`conftest.py`:
.. _`localplugin`:
.. _`conftest`:
.. _`local conftest plugins`:

conftest.py: local per-directory plugins
@@ -5,4 +5,4 @@ if "%TOXENV%" == "coveralls" (

    exit /b 0
)
)
C:\Python35\python -m tox
C:\Python36\python -m tox
@@ -624,8 +624,10 @@ class TestInvocationVariants(object):

        for p in search_path:
            monkeypatch.syspath_prepend(p)

        os.chdir('world')
        # mixed module and filenames:
        result = testdir.runpytest("--pyargs", "-v", "ns_pkg.hello", "world/ns_pkg")
        result = testdir.runpytest("--pyargs", "-v", "ns_pkg.hello", "ns_pkg/world")
        testdir.chdir()
        assert result.ret == 0
        result.stdout.fnmatch_lines([
            "*test_hello.py::test_hello*PASSED",
@@ -1,9 +1,11 @@

# coding: utf-8
from __future__ import absolute_import, division, print_function
import sys

import _pytest._code
import py
import pytest
from test_excinfo import TWMock


def test_ne():

@@ -172,3 +174,23 @@ class TestTracebackEntry(object):

        source = entry.getsource()
        assert len(source) == 6
        assert 'assert False' in source[5]


class TestReprFuncArgs(object):

    def test_not_raise_exception_with_mixed_encoding(self):
        from _pytest._code.code import ReprFuncArgs

        tw = TWMock()

        args = [
            ('unicode_string', u"São Paulo"),
            ('utf8_string', 'S\xc3\xa3o Paulo'),
        ]

        r = ReprFuncArgs(args)
        r.toterminal(tw)
        if sys.version_info[0] >= 3:
            assert tw.lines[0] == 'unicode_string = São Paulo, utf8_string = São Paulo'
        else:
            assert tw.lines[0] == 'unicode_string = São Paulo, utf8_string = São Paulo'
@@ -77,8 +77,8 @@ def test_excinfo_getstatement():

    linenumbers = [_pytest._code.getrawcode(f).co_firstlineno - 1 + 4,
                   _pytest._code.getrawcode(f).co_firstlineno - 1 + 1,
                   _pytest._code.getrawcode(g).co_firstlineno - 1 + 1, ]
    l = list(excinfo.traceback)
    foundlinenumbers = [x.lineno for x in l]
    values = list(excinfo.traceback)
    foundlinenumbers = [x.lineno for x in values]
    assert foundlinenumbers == linenumbers
    # for x in info:
    #    print "%s:%d %s" %(x.path.relto(root), x.lineno, x.statement)
@@ -244,7 +244,7 @@ class TestTraceback_f_g_h(object):

        def f(n):
            try:
                do_stuff()
            except:
            except:  # noqa
                reraise_me()

        excinfo = pytest.raises(RuntimeError, f, 8)

@@ -434,7 +434,7 @@ class TestFormattedExcinfo(object):

                exec(source.compile())
            except KeyboardInterrupt:
                raise
            except:
            except:  # noqa
                return _pytest._code.ExceptionInfo()
            assert 0, "did not raise"

@@ -1217,7 +1217,7 @@ def test_exception_repr_extraction_error_on_recursion():

    try:
        a(numpy_like())
    except:
    except:  # noqa
        from _pytest._code.code import ExceptionInfo
        from _pytest.pytester import LineMatcher
        exc_info = ExceptionInfo()

@@ -1241,7 +1241,7 @@ def test_no_recursion_index_on_recursion_error():

            return getattr(self, '_' + attr)

        RecursionDepthError().trigger
    except:
    except:  # noqa
        from _pytest._code.code import ExceptionInfo
        exc_info = ExceptionInfo()
        if sys.version_info[:2] == (2, 6):
@@ -155,8 +155,8 @@ class TestAccesses(object):

        assert len(self.source) == 4

    def test_iter(self):
        l = [x for x in self.source]
        assert len(l) == 4
        values = [x for x in self.source]
        assert len(values) == 4


class TestSourceParsingAndCompiling(object):

@@ -331,8 +331,8 @@ def test_getstartingblock_singleline():

    x = A('x', 'y')

    l = [i for i in x.source.lines if i.strip()]
    assert len(l) == 1
    values = [i for i in x.source.lines if i.strip()]
    assert len(values) == 1


def test_getline_finally():
@@ -391,7 +391,6 @@ def test_deindent():

    assert lines == ['', 'def f():', '    def g():', '        pass', '    ']


@pytest.mark.xfail("sys.version_info[:3] < (2,7,0)")
def test_source_of_class_at_eof_without_newline(tmpdir):
    # this test fails because the implicit inspect.getsource(A) below
    # does not return the "x = 1" last line.

@@ -22,5 +22,5 @@ def test_getstartingblock_multiline():

          ,
          'z')

    l = [i for i in x.source.lines if i.strip()]
    assert len(l) == 4
    values = [i for i in x.source.lines if i.strip()]
    assert len(values) == 4
@@ -78,4 +78,7 @@ def test_resultlog_is_deprecated(testdir):

        pass
    ''')
    result = testdir.runpytest('--result-log=%s' % testdir.tmpdir.join('result.log'))
    result.stdout.fnmatch_lines(['*--result-log is deprecated and scheduled for removal in pytest 4.0*'])
    result.stdout.fnmatch_lines([
        '*--result-log is deprecated and scheduled for removal in pytest 4.0*',
        '*See https://docs.pytest.org/*/usage.html#creating-resultlog-format-files for more information*',
    ])
@@ -147,11 +147,21 @@ class TestClass(object):

        ])

    def test_static_method(self, testdir):
        """Support for collecting staticmethod tests (#2528, #2699)"""
        testdir.getmodulecol("""
            import pytest
            class Test(object):
                @staticmethod
                def test_something():
                    pass

                @pytest.fixture
                def fix(self):
                    return 1

                @staticmethod
                def test_fix(fix):
                    assert fix == 1
        """)
        result = testdir.runpytest()
        if sys.version_info < (2, 7):

@@ -162,8 +172,8 @@ class TestClass(object):

            ])
        else:
            result.stdout.fnmatch_lines([
                "*collected 1 item*",
                "*1 passed in*",
                "*collected 2 items*",
                "*2 passed in*",
            ])

    def test_setup_teardown_class_as_classmethod(self, testdir):
@@ -863,11 +873,11 @@ class TestConftestCustomization(object):

    def test_makeitem_non_underscore(self, testdir, monkeypatch):
        modcol = testdir.getmodulecol("def _hello(): pass")
        l = []
        values = []
        monkeypatch.setattr(pytest.Module, 'makeitem',
                            lambda self, name, obj: l.append(name))
        l = modcol.collect()
        assert '_hello' not in l
                            lambda self, name, obj: values.append(name))
        values = modcol.collect()
        assert '_hello' not in values

    def test_issue2369_collect_module_fileext(self, testdir):
        """Ensure we can collect files with weird file extensions as Python
@@ -29,10 +29,16 @@ def test_getfuncargnames():

        def f(self, arg1, arg2="hello"):
            pass

        @staticmethod
        def static(arg1, arg2):
            pass

    assert fixtures.getfuncargnames(A().f) == ('arg1',)
    if sys.version_info < (3, 0):
        assert fixtures.getfuncargnames(A.f) == ('arg1',)

    assert fixtures.getfuncargnames(A.static, cls=A) == ('arg1', 'arg2')


class TestFillFixtures(object):
    def test_fillfuncargs_exposed(self):
@@ -542,12 +548,12 @@ class TestRequestBasic(object):

    def test_getfixturevalue(self, testdir, getfixmethod):
        item = testdir.getitem("""
            import pytest
            l = [2]
            values = [2]
            @pytest.fixture
            def something(request): return 1
            @pytest.fixture
            def other(request):
                return l.pop()
                return values.pop()
            def test_func(something): pass
        """)
        import contextlib

@@ -616,15 +622,15 @@ class TestRequestBasic(object):

    def test_request_addfinalizer_failing_setup(self, testdir):
        testdir.makepyfile("""
            import pytest
            l = [1]
            values = [1]
            @pytest.fixture
            def myfix(request):
                request.addfinalizer(l.pop)
                request.addfinalizer(values.pop)
                assert 0
            def test_fix(myfix):
                pass
            def test_finalizer_ran():
                assert not l
                assert not values
        """)
        reprec = testdir.inline_run("-s")
        reprec.assertoutcome(failed=1, passed=1)
@@ -632,30 +638,30 @@ class TestRequestBasic(object):

    def test_request_addfinalizer_failing_setup_module(self, testdir):
        testdir.makepyfile("""
            import pytest
            l = [1, 2]
            values = [1, 2]
            @pytest.fixture(scope="module")
            def myfix(request):
                request.addfinalizer(l.pop)
                request.addfinalizer(l.pop)
                request.addfinalizer(values.pop)
                request.addfinalizer(values.pop)
                assert 0
            def test_fix(myfix):
                pass
        """)
        reprec = testdir.inline_run("-s")
        mod = reprec.getcalls("pytest_runtest_setup")[0].item.module
        assert not mod.l
        assert not mod.values

    def test_request_addfinalizer_partial_setup_failure(self, testdir):
        p = testdir.makepyfile("""
            import pytest
            l = []
            values = []
            @pytest.fixture
            def something(request):
                request.addfinalizer(lambda: l.append(None))
                request.addfinalizer(lambda: values.append(None))
            def test_func(something, missingarg):
                pass
            def test_second():
                assert len(l) == 1
                assert len(values) == 1
        """)
        result = testdir.runpytest(p)
        result.stdout.fnmatch_lines([
@@ -669,7 +675,7 @@ class TestRequestBasic(object):
         """
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
            def _excepts(where):
                raise Exception('Error in %s fixture' % where)
            @pytest.fixture
@@ -677,17 +683,17 @@ class TestRequestBasic(object):
                 return request
             @pytest.fixture
             def something(subrequest):
-                subrequest.addfinalizer(lambda: l.append(1))
-                subrequest.addfinalizer(lambda: l.append(2))
+                subrequest.addfinalizer(lambda: values.append(1))
+                subrequest.addfinalizer(lambda: values.append(2))
                 subrequest.addfinalizer(lambda: _excepts('something'))
             @pytest.fixture
             def excepts(subrequest):
                 subrequest.addfinalizer(lambda: _excepts('excepts'))
-                subrequest.addfinalizer(lambda: l.append(3))
+                subrequest.addfinalizer(lambda: values.append(3))
             def test_first(something, excepts):
                 pass
             def test_second():
-                assert l == [3, 2, 1]
+                assert values == [3, 2, 1]
         """)
         result = testdir.runpytest()
         result.stdout.fnmatch_lines([
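The hunk above relies on `request.addfinalizer` running finalizers in last-in, first-out order, which is why `test_second` expects `[3, 2, 1]`. A minimal plain-Python model of that LIFO contract (illustrative only, not pytest's actual implementation):

```python
# Model of request.addfinalizer semantics: finalizers are stacked
# and executed in reverse registration order at teardown.
class FakeRequest:
    def __init__(self):
        self._finalizers = []

    def addfinalizer(self, fin):
        self._finalizers.append(fin)

    def teardown(self):
        while self._finalizers:
            self._finalizers.pop()()  # last registered runs first

values = []
req = FakeRequest()
req.addfinalizer(lambda: values.append(1))
req.addfinalizer(lambda: values.append(2))
req.addfinalizer(lambda: values.append(3))
req.teardown()
print(values)  # [3, 2, 1]
```

The result matches the `assert values == [3, 2, 1]` in the test.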
@@ -742,13 +748,13 @@ class TestRequestBasic(object):
     def test_setupdecorator_and_xunit(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(scope='module', autouse=True)
             def setup_module():
-                l.append("module")
+                values.append("module")
             @pytest.fixture(autouse=True)
             def setup_function():
-                l.append("function")
+                values.append("function")

             def test_func():
                 pass
@@ -756,14 +762,14 @@ class TestRequestBasic(object):
             class TestClass(object):
                 @pytest.fixture(scope="class", autouse=True)
                 def setup_class(self):
-                    l.append("class")
+                    values.append("class")
                 @pytest.fixture(autouse=True)
                 def setup_method(self):
-                    l.append("method")
+                    values.append("method")
                 def test_method(self):
                     pass
             def test_all():
-                assert l == ["module", "function", "class",
+                assert values == ["module", "function", "class",
                              "function", "method", "function"]
         """)
         reprec = testdir.inline_run("-v")
@@ -924,10 +930,10 @@ class TestRequestCachedSetup(object):
     def test_request_cachedsetup_extrakey(self, testdir):
         item1 = testdir.getitem("def test_func(): pass")
         req1 = fixtures.FixtureRequest(item1)
-        l = ["hello", "world"]
+        values = ["hello", "world"]

         def setup():
-            return l.pop()
+            return values.pop()

         ret1 = req1.cached_setup(setup, extrakey=1)
         ret2 = req1.cached_setup(setup, extrakey=2)
@@ -941,24 +947,24 @@ class TestRequestCachedSetup(object):
     def test_request_cachedsetup_cache_deletion(self, testdir):
         item1 = testdir.getitem("def test_func(): pass")
         req1 = fixtures.FixtureRequest(item1)
-        l = []
+        values = []

         def setup():
-            l.append("setup")
+            values.append("setup")

         def teardown(val):
-            l.append("teardown")
+            values.append("teardown")

         req1.cached_setup(setup, teardown, scope="function")
-        assert l == ['setup']
+        assert values == ['setup']
         # artificial call of finalizer
         setupstate = req1._pyfuncitem.session._setupstate
         setupstate._callfinalizers(item1)
-        assert l == ["setup", "teardown"]
+        assert values == ["setup", "teardown"]
         req1.cached_setup(setup, teardown, scope="function")
-        assert l == ["setup", "teardown", "setup"]
+        assert values == ["setup", "teardown", "setup"]
         setupstate._callfinalizers(item1)
-        assert l == ["setup", "teardown", "setup", "teardown"]
+        assert values == ["setup", "teardown", "setup", "teardown"]

     def test_request_cached_setup_two_args(self, testdir):
         testdir.makepyfile("""
@@ -1000,17 +1006,17 @@ class TestRequestCachedSetup(object):
     def test_request_cached_setup_functional(self, testdir):
         testdir.makepyfile(test_0="""
             import pytest
-            l = []
+            values = []
             @pytest.fixture
             def something(request):
                 val = request.cached_setup(fsetup, fteardown)
                 return val
             def fsetup(mycache=[1]):
-                l.append(mycache.pop())
-                return l
+                values.append(mycache.pop())
+                return values
             def fteardown(something):
-                l.remove(something[0])
-                l.append(2)
+                values.remove(something[0])
+                values.append(2)
             def test_list_once(something):
                 assert something == [1]
             def test_list_twice(something):
@@ -1019,7 +1025,7 @@ class TestRequestCachedSetup(object):
         testdir.makepyfile(test_1="""
             import test_0 # should have run already
             def test_check_test0_has_teardown_correct():
-                assert test_0.l == [2]
+                assert test_0.values == [2]
         """)
         result = testdir.runpytest("-v")
         result.stdout.fnmatch_lines([
@@ -1144,10 +1150,10 @@ class TestFixtureUsages(object):
     def test_funcarg_parametrized_and_used_twice(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(params=[1,2])
             def arg1(request):
-                l.append(1)
+                values.append(1)
                 return request.param

             @pytest.fixture()
@@ -1156,7 +1162,7 @@ class TestFixtureUsages(object):

             def test_add(arg1, arg2):
                 assert arg2 == arg1 + 1
-                assert len(l) == arg1
+                assert len(values) == arg1
         """)
         result = testdir.runpytest()
         result.stdout.fnmatch_lines([
@@ -1197,8 +1203,8 @@ class TestFixtureUsages(object):

         """)
         reprec = testdir.inline_run()
-        l = reprec.getfailedcollections()
-        assert len(l) == 1
+        values = reprec.getfailedcollections()
+        assert len(values) == 1

     def test_request_can_be_overridden(self, testdir):
         testdir.makepyfile("""
@@ -1217,20 +1223,20 @@ class TestFixtureUsages(object):
         testdir.makepyfile("""
             import pytest

-            l = []
+            values = []

             @pytest.fixture(scope="class")
             def myfix(request):
                 request.cls.hello = "world"
-                l.append(1)
+                values.append(1)

             class TestClass(object):
                 def test_one(self):
                     assert self.hello == "world"
-                    assert len(l) == 1
+                    assert len(values) == 1
                 def test_two(self):
                     assert self.hello == "world"
-                    assert len(l) == 1
+                    assert len(values) == 1
             pytest.mark.usefixtures("myfix")(TestClass)
         """)
         reprec = testdir.inline_run()
@@ -1284,7 +1290,7 @@ class TestFixtureUsages(object):
         testdir.makepyfile("""
             import pytest

-            l = []
+            values = []
             def f():
                 yield 1
                 yield 2
@@ -1298,14 +1304,14 @@ class TestFixtureUsages(object):
                 return request.param

             def test_1(arg):
-                l.append(arg)
+                values.append(arg)
             def test_2(arg2):
-                l.append(arg2*10)
+                values.append(arg2*10)
         """)
         reprec = testdir.inline_run("-v")
         reprec.assertoutcome(passed=4)
-        l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
-        assert l == [1, 2, 10, 20]
+        values = reprec.getcalls("pytest_runtest_call")[0].item.module.values
+        assert values == [1, 2, 10, 20]


 class TestFixtureManagerParseFactories(object):
@@ -1455,19 +1461,19 @@ class TestAutouseDiscovery(object):
         testdir.makepyfile("""
             import pytest
             class TestA(object):
-                l = []
+                values = []
                 @pytest.fixture(autouse=True)
                 def setup1(self):
-                    self.l.append(1)
+                    self.values.append(1)
                 def test_setup1(self):
-                    assert self.l == [1]
+                    assert self.values == [1]
             class TestB(object):
-                l = []
+                values = []
                 @pytest.fixture(autouse=True)
                 def setup2(self):
-                    self.l.append(1)
+                    self.values.append(1)
                 def test_setup2(self):
-                    assert self.l == [1]
+                    assert self.values == [1]
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=2)
@@ -1550,22 +1556,22 @@ class TestAutouseDiscovery(object):
     def test_autouse_in_module_and_two_classes(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(autouse=True)
             def append1():
-                l.append("module")
+                values.append("module")
             def test_x():
-                assert l == ["module"]
+                assert values == ["module"]

             class TestA(object):
                 @pytest.fixture(autouse=True)
                 def append2(self):
-                    l.append("A")
+                    values.append("A")
                 def test_hello(self):
-                    assert l == ["module", "module", "A"], l
+                    assert values == ["module", "module", "A"], values
             class TestA2(object):
                 def test_world(self):
-                    assert l == ["module", "module", "A", "module"], l
+                    assert values == ["module", "module", "A", "module"], values
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=3)
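The autouse-discovery test above encodes a visibility rule: a module-level autouse fixture applies to every test in the module, while a class-level autouse fixture applies only to tests inside its class. A small plain-Python sketch of that rule (the `run` helper is an illustrative stand-in for the test runner, not pytest code):

```python
values = []

def run(tests):
    # tests is a list of (owning_class_name_or_None, test_callable)
    for cls, test in tests:
        values.append("module")      # module-level autouse runs for every test
        if cls == "TestA":
            values.append("A")       # class-level autouse runs only for TestA
        test()

run([(None, lambda: None),       # test_x
     ("TestA", lambda: None),    # TestA.test_hello
     ("TestA2", lambda: None)])  # TestA2.test_world
print(values)  # ['module', 'module', 'A', 'module']
```

This reproduces the sequence asserted in `TestA2.test_world`.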
@@ -1609,23 +1615,23 @@ class TestAutouseManagement(object):
     def test_funcarg_and_setup(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(scope="module")
             def arg():
-                l.append(1)
+                values.append(1)
                 return 0
             @pytest.fixture(scope="module", autouse=True)
             def something(arg):
-                l.append(2)
+                values.append(2)

             def test_hello(arg):
-                assert len(l) == 2
-                assert l == [1,2]
+                assert len(values) == 2
+                assert values == [1,2]
                 assert arg == 0

             def test_hello2(arg):
-                assert len(l) == 2
-                assert l == [1,2]
+                assert len(values) == 2
+                assert values == [1,2]
                 assert arg == 0
         """)
         reprec = testdir.inline_run()
@@ -1634,20 +1640,20 @@ class TestAutouseManagement(object):
     def test_uses_parametrized_resource(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(params=[1,2])
             def arg(request):
                 return request.param

             @pytest.fixture(autouse=True)
             def something(arg):
-                l.append(arg)
+                values.append(arg)

             def test_hello():
-                if len(l) == 1:
-                    assert l == [1]
-                elif len(l) == 2:
-                    assert l == [1, 2]
+                if len(values) == 1:
+                    assert values == [1]
+                elif len(values) == 2:
+                    assert values == [1, 2]
                 else:
                     0/0
@@ -1659,7 +1665,7 @@ class TestAutouseManagement(object):
         testdir.makepyfile("""
             import pytest

-            l = []
+            values = []

             @pytest.fixture(scope="session", params=[1,2])
             def arg(request):
@@ -1668,14 +1674,14 @@ class TestAutouseManagement(object):
             @pytest.fixture(scope="function", autouse=True)
             def append(request, arg):
                 if request.function.__name__ == "test_some":
-                    l.append(arg)
+                    values.append(arg)

             def test_some():
                 pass

             def test_result(arg):
-                assert len(l) == arg
-                assert l[:arg] == [1,2][:arg]
+                assert len(values) == arg
+                assert values[:arg] == [1,2][:arg]
         """)
         reprec = testdir.inline_run("-v", "-s")
         reprec.assertoutcome(passed=4)
@@ -1685,7 +1691,7 @@ class TestAutouseManagement(object):
             import pytest
             import pprint

-            l = []
+            values = []

             @pytest.fixture(scope="function", params=[1,2])
             def farg(request):
@@ -1698,7 +1704,7 @@ class TestAutouseManagement(object):
             @pytest.fixture(scope="function", autouse=True)
             def append(request, farg, carg):
                 def fin():
-                    l.append("fin_%s%s" % (carg, farg))
+                    values.append("fin_%s%s" % (carg, farg))
                 request.addfinalizer(fin)
         """)
         testdir.makepyfile("""
@@ -1715,26 +1721,26 @@ class TestAutouseManagement(object):
         reprec = testdir.inline_run("-v", "-s", confcut)
         reprec.assertoutcome(passed=8)
         config = reprec.getcalls("pytest_unconfigure")[0].config
-        l = config.pluginmanager._getconftestmodules(p)[0].l
-        assert l == ["fin_a1", "fin_a2", "fin_b1", "fin_b2"] * 2
+        values = config.pluginmanager._getconftestmodules(p)[0].values
+        assert values == ["fin_a1", "fin_a2", "fin_b1", "fin_b2"] * 2

     def test_scope_ordering(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(scope="function", autouse=True)
             def fappend2():
-                l.append(2)
+                values.append(2)
             @pytest.fixture(scope="class", autouse=True)
             def classappend3():
-                l.append(3)
+                values.append(3)
             @pytest.fixture(scope="module", autouse=True)
             def mappend():
-                l.append(1)
+                values.append(1)

             class TestHallo(object):
                 def test_method(self):
-                    assert l == [1,3,2]
+                    assert values == [1,3,2]
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=1)
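`test_scope_ordering` above asserts `[1, 3, 2]` because autouse fixtures are ordered by scope, broadest first: module before class before function, regardless of definition order in the file. A sketch of that ordering rule (the scope table is illustrative):

```python
# Higher-scoped fixtures are instantiated before narrower-scoped ones.
SCOPE_ORDER = {"session": 0, "module": 1, "class": 2, "function": 3}

# (fixture name, scope) in source-definition order, as in the test above.
fixtures = [
    ("fappend2", "function"),
    ("classappend3", "class"),
    ("mappend", "module"),
]
execution = [name for name, scope in sorted(fixtures, key=lambda f: SCOPE_ORDER[f[1]])]
print(execution)  # ['mappend', 'classappend3', 'fappend2'] -> appends 1, 3, 2
```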
@@ -1742,23 +1748,23 @@ class TestAutouseManagement(object):
     def test_parametrization_setup_teardown_ordering(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             def pytest_generate_tests(metafunc):
                 if metafunc.cls is not None:
                     metafunc.parametrize("item", [1,2], scope="class")
             class TestClass(object):
                 @pytest.fixture(scope="class", autouse=True)
                 def addteardown(self, item, request):
-                    l.append("setup-%d" % item)
-                    request.addfinalizer(lambda: l.append("teardown-%d" % item))
+                    values.append("setup-%d" % item)
+                    request.addfinalizer(lambda: values.append("teardown-%d" % item))
                 def test_step1(self, item):
-                    l.append("step1-%d" % item)
+                    values.append("step1-%d" % item)
                 def test_step2(self, item):
-                    l.append("step2-%d" % item)
+                    values.append("step2-%d" % item)

             def test_finish():
-                print (l)
-                assert l == ["setup-1", "step1-1", "step2-1", "teardown-1",
+                print (values)
+                assert values == ["setup-1", "step1-1", "step2-1", "teardown-1",
                              "setup-2", "step1-2", "step2-2", "teardown-2",]
         """)
         reprec = testdir.inline_run()
@@ -1768,15 +1774,15 @@ class TestAutouseManagement(object):
         testdir.makepyfile("""
             import pytest

-            l = []
+            values = []
             @pytest.fixture(autouse=True)
             def fix1():
-                l.append(1)
+                values.append(1)
             @pytest.fixture()
             def arg1():
-                l.append(2)
+                values.append(2)
             def test_hello(arg1):
-                assert l == [1,2]
+                assert values == [1,2]
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=1)
@@ -1787,20 +1793,20 @@ class TestAutouseManagement(object):
     def test_ordering_dependencies_torndown_first(self, testdir, param1, param2):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(%(param1)s)
             def arg1(request):
-                request.addfinalizer(lambda: l.append("fin1"))
-                l.append("new1")
+                request.addfinalizer(lambda: values.append("fin1"))
+                values.append("new1")
             @pytest.fixture(%(param2)s)
             def arg2(request, arg1):
-                request.addfinalizer(lambda: l.append("fin2"))
-                l.append("new2")
+                request.addfinalizer(lambda: values.append("fin2"))
+                values.append("new2")

             def test_arg(arg2):
                 pass
             def test_check():
-                assert l == ["new1", "new2", "fin2", "fin1"]
+                assert values == ["new1", "new2", "fin2", "fin1"]
         """ % locals())
         reprec = testdir.inline_run("-s")
         reprec.assertoutcome(passed=2)
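The dependency-teardown test above asserts `["new1", "new2", "fin2", "fin1"]`: a fixture that depends on another is set up after its dependency and torn down before it. A plain-Python sketch of that invariant (the `setup_*` helpers are illustrative stand-ins for the two fixtures):

```python
values = []
finalizers = []

def setup_arg1():
    finalizers.append(lambda: values.append("fin1"))
    values.append("new1")

def setup_arg2():
    setup_arg1()                    # dependency is set up first
    finalizers.append(lambda: values.append("fin2"))
    values.append("new2")

setup_arg2()                        # test_arg requests arg2
while finalizers:
    finalizers.pop()()              # teardown in reverse setup order

print(values)  # ['new1', 'new2', 'fin2', 'fin1']
```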
@@ -1813,11 +1819,11 @@ class TestFixtureMarker(object):
             @pytest.fixture(params=["a", "b", "c"])
             def arg(request):
                 return request.param
-            l = []
+            values = []
             def test_param(arg):
-                l.append(arg)
+                values.append(arg)
             def test_result():
-                assert l == list("abc")
+                assert values == list("abc")
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=4)
@@ -1861,21 +1867,21 @@ class TestFixtureMarker(object):
     def test_scope_session(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(scope="module")
             def arg():
-                l.append(1)
+                values.append(1)
                 return 1

             def test_1(arg):
                 assert arg == 1
             def test_2(arg):
                 assert arg == 1
-                assert len(l) == 1
+                assert len(values) == 1
             class TestClass(object):
                 def test3(self, arg):
                     assert arg == 1
-                    assert len(l) == 1
+                    assert len(values) == 1
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=3)
@@ -1883,10 +1889,10 @@ class TestFixtureMarker(object):
     def test_scope_session_exc(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(scope="session")
             def fix():
-                l.append(1)
+                values.append(1)
                 pytest.skip('skipping')

             def test_1(fix):
@@ -1894,7 +1900,7 @@ class TestFixtureMarker(object):
             def test_2(fix):
                 pass
             def test_last():
-                assert l == [1]
+                assert values == [1]
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(skipped=2, passed=1)
@@ -1902,11 +1908,11 @@ class TestFixtureMarker(object):
     def test_scope_session_exc_two_fix(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             m = []
             @pytest.fixture(scope="session")
             def a():
-                l.append(1)
+                values.append(1)
                 pytest.skip('skipping')
             @pytest.fixture(scope="session")
             def b(a):
@@ -1917,7 +1923,7 @@ class TestFixtureMarker(object):
             def test_2(b):
                 pass
             def test_last():
-                assert l == [1]
+                assert values == [1]
                 assert m == []
         """)
         reprec = testdir.inline_run()
@@ -1955,21 +1961,21 @@ class TestFixtureMarker(object):
     def test_scope_module_uses_session(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(scope="module")
             def arg():
-                l.append(1)
+                values.append(1)
                 return 1

             def test_1(arg):
                 assert arg == 1
             def test_2(arg):
                 assert arg == 1
-                assert len(l) == 1
+                assert len(values) == 1
             class TestClass(object):
                 def test3(self, arg):
                     assert arg == 1
-                    assert len(l) == 1
+                    assert len(values) == 1
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=3)
@@ -2064,17 +2070,17 @@ class TestFixtureMarker(object):
             @pytest.fixture(scope="module", params=["a", "b", "c"])
             def arg(request):
                 return request.param
-            l = []
+            values = []
             def test_param(arg):
-                l.append(arg)
+                values.append(arg)
         """)
         reprec = testdir.inline_run("-v")
         reprec.assertoutcome(passed=3)
-        l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
-        assert len(l) == 3
-        assert "a" in l
-        assert "b" in l
-        assert "c" in l
+        values = reprec.getcalls("pytest_runtest_call")[0].item.module.values
+        assert len(values) == 3
+        assert "a" in values
+        assert "b" in values
+        assert "c" in values

     def test_scope_mismatch(self, testdir):
         testdir.makeconftest("""
@@ -2105,16 +2111,16 @@ class TestFixtureMarker(object):
             def arg(request):
                 return request.param

-            l = []
+            values = []
             def test_1(arg):
-                l.append(arg)
+                values.append(arg)
             def test_2(arg):
-                l.append(arg)
+                values.append(arg)
         """)
         reprec = testdir.inline_run("-v")
         reprec.assertoutcome(passed=4)
-        l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
-        assert l == [1, 1, 2, 2]
+        values = reprec.getcalls("pytest_runtest_call")[0].item.module.values
+        assert values == [1, 1, 2, 2]

     def test_module_parametrized_ordering(self, testdir):
         testdir.makeconftest("""
@@ -2166,7 +2172,7 @@ class TestFixtureMarker(object):
         testdir.makeconftest("""
             import pytest

-            l = []
+            values = []

             @pytest.fixture(scope="function", params=[1,2])
             def farg(request):
@@ -2179,7 +2185,7 @@ class TestFixtureMarker(object):
             @pytest.fixture(scope="function", autouse=True)
             def append(request, farg, carg):
                 def fin():
-                    l.append("fin_%s%s" % (carg, farg))
+                    values.append("fin_%s%s" % (carg, farg))
                 request.addfinalizer(fin)
         """)
         testdir.makepyfile("""
@@ -2217,30 +2223,30 @@ class TestFixtureMarker(object):
             @pytest.fixture(scope="function", params=[1, 2])
             def arg(request):
                 param = request.param
-                request.addfinalizer(lambda: l.append("fin:%s" % param))
-                l.append("create:%s" % param)
+                request.addfinalizer(lambda: values.append("fin:%s" % param))
+                values.append("create:%s" % param)
                 return request.param

             @pytest.fixture(scope="module", params=["mod1", "mod2"])
             def modarg(request):
                 param = request.param
-                request.addfinalizer(lambda: l.append("fin:%s" % param))
-                l.append("create:%s" % param)
+                request.addfinalizer(lambda: values.append("fin:%s" % param))
+                values.append("create:%s" % param)
                 return request.param

-            l = []
+            values = []
             def test_1(arg):
-                l.append("test1")
+                values.append("test1")
             def test_2(modarg):
-                l.append("test2")
+                values.append("test2")
             def test_3(arg, modarg):
-                l.append("test3")
+                values.append("test3")
             def test_4(modarg, arg):
-                l.append("test4")
+                values.append("test4")
         """)
         reprec = testdir.inline_run("-v")
         reprec.assertoutcome(passed=12)
-        l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
+        values = reprec.getcalls("pytest_runtest_call")[0].item.module.values
         expected = [
             'create:1', 'test1', 'fin:1', 'create:2', 'test1',
             'fin:2', 'create:mod1', 'test2', 'create:1', 'test3',
@@ -2251,8 +2257,8 @@ class TestFixtureMarker(object):
             'test4', 'fin:1', 'create:2', 'test4', 'fin:2',
             'fin:mod2']
         import pprint
-        pprint.pprint(list(zip(l, expected)))
-        assert l == expected
+        pprint.pprint(list(zip(values, expected)))
+        assert values == expected

     def test_parametrized_fixture_teardown_order(self, testdir):
         testdir.makepyfile("""
@@ -2261,29 +2267,29 @@ class TestFixtureMarker(object):
             def param1(request):
                 return request.param

-            l = []
+            values = []

             class TestClass(object):
                 @classmethod
                 @pytest.fixture(scope="class", autouse=True)
                 def setup1(self, request, param1):
-                    l.append(1)
+                    values.append(1)
                     request.addfinalizer(self.teardown1)
                 @classmethod
                 def teardown1(self):
-                    assert l.pop() == 1
+                    assert values.pop() == 1
                 @pytest.fixture(scope="class", autouse=True)
                 def setup2(self, request, param1):
-                    l.append(2)
+                    values.append(2)
                     request.addfinalizer(self.teardown2)
                 @classmethod
                 def teardown2(self):
-                    assert l.pop() == 2
+                    assert values.pop() == 2
                 def test(self):
                     pass

             def test_finish():
-                assert not l
+                assert not values
         """)
         result = testdir.runpytest("-v")
         result.stdout.fnmatch_lines("""
@@ -2348,42 +2354,42 @@ class TestFixtureMarker(object):
     def test_request_is_clean(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(params=[1, 2])
             def fix(request):
-                request.addfinalizer(lambda: l.append(request.param))
+                request.addfinalizer(lambda: values.append(request.param))
             def test_fix(fix):
                 pass
         """)
         reprec = testdir.inline_run("-s")
-        l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
-        assert l == [1, 2]
+        values = reprec.getcalls("pytest_runtest_call")[0].item.module.values
+        assert values == [1, 2]

     def test_parametrize_separated_lifecycle(self, testdir):
         testdir.makepyfile("""
             import pytest

-            l = []
+            values = []
             @pytest.fixture(scope="module", params=[1, 2])
             def arg(request):
                 x = request.param
-                request.addfinalizer(lambda: l.append("fin%s" % x))
+                request.addfinalizer(lambda: values.append("fin%s" % x))
                 return request.param
             def test_1(arg):
-                l.append(arg)
+                values.append(arg)
             def test_2(arg):
-                l.append(arg)
+                values.append(arg)
         """)
         reprec = testdir.inline_run("-vs")
         reprec.assertoutcome(passed=4)
-        l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
+        values = reprec.getcalls("pytest_runtest_call")[0].item.module.values
         import pprint
-        pprint.pprint(l)
-        # assert len(l) == 6
-        assert l[0] == l[1] == 1
-        assert l[2] == "fin1"
-        assert l[3] == l[4] == 2
-        assert l[5] == "fin2"
+        pprint.pprint(values)
+        # assert len(values) == 6
+        assert values[0] == values[1] == 1
+        assert values[2] == "fin1"
+        assert values[3] == values[4] == 2
+        assert values[5] == "fin2"

     def test_parametrize_function_scoped_finalizers_called(self, testdir):
         testdir.makepyfile("""
@@ -2392,17 +2398,17 @@ class TestFixtureMarker(object):
             @pytest.fixture(scope="function", params=[1, 2])
             def arg(request):
                 x = request.param
-                request.addfinalizer(lambda: l.append("fin%s" % x))
+                request.addfinalizer(lambda: values.append("fin%s" % x))
                 return request.param

-            l = []
+            values = []
             def test_1(arg):
-                l.append(arg)
+                values.append(arg)
             def test_2(arg):
-                l.append(arg)
+                values.append(arg)
             def test_3():
-                assert len(l) == 8
-                assert l == [1, "fin1", 2, "fin2", 1, "fin1", 2, "fin2"]
+                assert len(values) == 8
+                assert values == [1, "fin1", 2, "fin2", 1, "fin1", 2, "fin2"]
         """)
         reprec = testdir.inline_run("-v")
         reprec.assertoutcome(passed=5)
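The function-scoped parametrized test above expects `[1, "fin1", 2, "fin2", 1, "fin1", 2, "fin2"]`: for each test, every param value is set up fresh and its finalizer runs immediately after that test invocation. A plain-Python sketch of that lifecycle (the `run` helper is an illustrative stand-in for the runner):

```python
values = []

def run(test, params):
    for param in params:
        # function scope: a fresh fixture instance per (test, param) pair
        test(param)
        values.append("fin%s" % param)  # finalizer fires right after the test

run(values.append, [1, 2])  # test_1 appends its arg
run(values.append, [1, 2])  # test_2 appends its arg
print(values)  # [1, 'fin1', 2, 'fin2', 1, 'fin1', 2, 'fin2']
```

With a module-scoped fixture instead (as in `test_parametrize_separated_lifecycle` earlier), the finalizer would fire only once per param, after all tests using it.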
@@ -2412,7 +2418,7 @@ class TestFixtureMarker(object):
     def test_finalizer_order_on_parametrization(self, scope, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []

             @pytest.fixture(scope=%(scope)r, params=["1"])
             def fix1(request):
|
||||
@pytest.fixture(scope=%(scope)r)
|
||||
def fix2(request, base):
|
||||
def cleanup_fix2():
|
||||
assert not l, "base should not have been finalized"
|
||||
assert not values, "base should not have been finalized"
|
||||
request.addfinalizer(cleanup_fix2)
|
||||
|
||||
@pytest.fixture(scope=%(scope)r)
|
||||
def base(request, fix1):
|
||||
def cleanup_base():
|
||||
l.append("fin_base")
|
||||
values.append("fin_base")
|
||||
print ("finalizing base")
|
||||
request.addfinalizer(cleanup_base)
|
||||
|
||||
@@ -2445,29 +2451,29 @@ class TestFixtureMarker(object):
     def test_class_scope_parametrization_ordering(self, testdir):
         testdir.makepyfile("""
             import pytest
-            l = []
+            values = []
             @pytest.fixture(params=["John", "Doe"], scope="class")
             def human(request):
-                request.addfinalizer(lambda: l.append("fin %s" % request.param))
+                request.addfinalizer(lambda: values.append("fin %s" % request.param))
                 return request.param

             class TestGreetings(object):
                 def test_hello(self, human):
-                    l.append("test_hello")
+                    values.append("test_hello")

             class TestMetrics(object):
                 def test_name(self, human):
-                    l.append("test_name")
+                    values.append("test_name")

                 def test_population(self, human):
-                    l.append("test_population")
+                    values.append("test_population")
         """)
         reprec = testdir.inline_run()
         reprec.assertoutcome(passed=6)
-        l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
-        assert l == ["test_hello", "fin John", "test_hello", "fin Doe",
+        values = reprec.getcalls("pytest_runtest_call")[0].item.module.values
+        assert values == ["test_hello", "fin John", "test_hello", "fin Doe",
                      "test_name", "test_population", "fin John",
                      "test_name", "test_population", "fin Doe"]

     def test_parametrize_setup_function(self, testdir):
         testdir.makepyfile("""
@@ -2479,21 +2485,21 @@ class TestFixtureMarker(object):

             @pytest.fixture(scope="module", autouse=True)
             def mysetup(request, arg):
-                request.addfinalizer(lambda: l.append("fin%s" % arg))
-                l.append("setup%s" % arg)
+                request.addfinalizer(lambda: values.append("fin%s" % arg))
+                values.append("setup%s" % arg)

-            l = []
+            values = []
             def test_1(arg):
-                l.append(arg)
+                values.append(arg)
             def test_2(arg):
-                l.append(arg)
+                values.append(arg)
             def test_3():
                 import pprint
-                pprint.pprint(l)
+                pprint.pprint(values)
                 if arg == 1:
-                    assert l == ["setup1", 1, 1, ]
+                    assert values == ["setup1", 1, 1, ]
                 elif arg == 2:
-                    assert l == ["setup1", 1, 1, "fin1",
+                    assert values == ["setup1", 1, 1, "fin1",
                                  "setup2", 2, 2, ]

         """)
@@ -2654,13 +2660,13 @@ class TestErrors(object):
                 request.addfinalizer(f)
                 return object()

-            l = []
+            values = []
             def test_1(fix1):
-                l.append(fix1)
+                values.append(fix1)
             def test_2(fix1):
-                l.append(fix1)
+                values.append(fix1)
             def test_3():
-                assert l[0] != l[1]
+                assert values[0] != values[1]
         """)
         result = testdir.runpytest()
         result.stdout.fnmatch_lines("""
@@ -93,8 +93,8 @@ class TestMockDecoration(object):
         def f(x):
             pass

-        l = getfuncargnames(f)
-        assert l == ("x",)
+        values = getfuncargnames(f)
+        assert values == ("x",)

     def test_wrapped_getfuncargnames_patching(self):
         from _pytest.compat import getfuncargnames
@@ -110,8 +110,8 @@ class TestMockDecoration(object):
         def f(x, y, z):
             pass

-        l = getfuncargnames(f)
-        assert l == ("y", "z")
+        values = getfuncargnames(f)
+        assert values == ("y", "z")

     def test_unittest_mock(self, testdir):
         pytest.importorskip("unittest.mock")
@@ -1071,21 +1071,21 @@ class TestMetafuncFunctional(object):
|
||||
def test_parametrize_scope_overrides(self, testdir, scope, length):
|
||||
testdir.makepyfile("""
|
||||
import pytest
|
||||
l = []
|
||||
values = []
|
||||
def pytest_generate_tests(metafunc):
|
||||
if "arg" in metafunc.funcargnames:
|
||||
metafunc.parametrize("arg", [1,2], indirect=True,
|
||||
scope=%r)
|
||||
@pytest.fixture
|
||||
def arg(request):
|
||||
l.append(request.param)
|
||||
values.append(request.param)
|
||||
return request.param
|
||||
def test_hello(arg):
|
||||
assert arg in (1,2)
|
||||
def test_world(arg):
|
||||
assert arg in (1,2)
|
||||
def test_checklength():
|
||||
assert len(l) == %d
|
||||
assert len(values) == %d
|
||||
""" % (scope, length))
|
||||
reprec = testdir.inline_run()
|
||||
reprec.assertoutcome(passed=5)
|
||||
|
||||
@@ -135,3 +135,24 @@ def test_verbose_include_private_fixtures_and_loc(testdir):
|
||||
'arg3 -- test_verbose_include_private_fixtures_and_loc.py:3',
|
||||
' arg3 from testmodule',
|
||||
])
|
||||
|
||||
|
||||
def test_doctest_items(testdir):
|
||||
testdir.makepyfile('''
|
||||
def foo():
|
||||
"""
|
||||
>>> 1 + 1
|
||||
2
|
||||
"""
|
||||
''')
|
||||
testdir.maketxtfile('''
|
||||
>>> 1 + 1
|
||||
2
|
||||
''')
|
||||
result = testdir.runpytest("--fixtures-per-test", "--doctest-modules",
|
||||
"--doctest-glob=*.txt", "-v")
|
||||
assert result.ret == 0
|
||||
|
||||
result.stdout.fnmatch_lines([
|
||||
'*collected 2 items*',
|
||||
])
|
||||
|
||||
@@ -82,7 +82,7 @@ class TestArgComplete(object):
|
||||
from _pytest._argcomplete import FastFilesCompleter
|
||||
ffc = FastFilesCompleter()
|
||||
fc = FilesCompleter()
|
||||
for x in '/ /d /data qqq'.split():
|
||||
for x in ['/', '/d', '/data', 'qqq', '']:
|
||||
assert equal_with_bash(x, ffc, fc, out=py.std.sys.stdout)
|
||||
|
||||
@pytest.mark.skipif("sys.platform in ('win32', 'darwin')")
|
||||
|
||||
@@ -229,9 +229,9 @@ class TestImportHookInstallation(object):
|
||||
return pkg.helper.tool
|
||||
""",
|
||||
'pkg/other.py': """
|
||||
l = [3, 2]
|
||||
values = [3, 2]
|
||||
def tool():
|
||||
assert l.pop() == 3
|
||||
assert values.pop() == 3
|
||||
""",
|
||||
'conftest.py': """
|
||||
pytest_plugins = ['pkg.plugin']
|
||||
@@ -248,7 +248,7 @@ class TestImportHookInstallation(object):
|
||||
result = testdir.runpytest_subprocess('--assert=rewrite')
|
||||
result.stdout.fnmatch_lines(['>*assert a == b*',
|
||||
'E*assert 2 == 3*',
|
||||
'>*assert l.pop() == 3*',
|
||||
'>*assert values.pop() == 3*',
|
||||
'E*AssertionError'])
|
||||
|
||||
def test_register_assert_rewrite_checks_types(self):
|
||||
@@ -263,13 +263,13 @@ class TestBinReprIntegration(object):
|
||||
def test_pytest_assertrepr_compare_called(self, testdir):
|
||||
testdir.makeconftest("""
|
||||
import pytest
|
||||
l = []
|
||||
values = []
|
||||
def pytest_assertrepr_compare(op, left, right):
|
||||
l.append((op, left, right))
|
||||
values.append((op, left, right))
|
||||
|
||||
@pytest.fixture
|
||||
def list(request):
|
||||
return l
|
||||
return values
|
||||
""")
|
||||
testdir.makepyfile("""
|
||||
def test_hello():
|
||||
|
||||
@@ -65,13 +65,18 @@ class TestAssertionRewrite(object):
def test_place_initial_imports(self):
s = """'Doc string'\nother = stuff"""
m = rewrite(s)
assert isinstance(m.body[0], ast.Expr)
assert isinstance(m.body[0].value, ast.Str)
for imp in m.body[1:3]:
# Module docstrings in 3.7 are part of Module node, it's not in the body
# so we remove it so the following body items have the same indexes on
# all Python versions
if sys.version_info < (3, 7):
assert isinstance(m.body[0], ast.Expr)
assert isinstance(m.body[0].value, ast.Str)
del m.body[0]
for imp in m.body[0:2]:
assert isinstance(imp, ast.Import)
assert imp.lineno == 2
assert imp.col_offset == 0
assert isinstance(m.body[3], ast.Assign)
assert isinstance(m.body[2], ast.Assign)
s = """from __future__ import with_statement\nother_stuff"""
m = rewrite(s)
assert isinstance(m.body[0], ast.ImportFrom)
@@ -80,16 +85,29 @@ class TestAssertionRewrite(object):
assert imp.lineno == 2
assert imp.col_offset == 0
assert isinstance(m.body[3], ast.Expr)
s = """'doc string'\nfrom __future__ import with_statement"""
m = rewrite(s)
if sys.version_info < (3, 7):
assert isinstance(m.body[0], ast.Expr)
assert isinstance(m.body[0].value, ast.Str)
del m.body[0]
assert isinstance(m.body[0], ast.ImportFrom)
for imp in m.body[1:3]:
assert isinstance(imp, ast.Import)
assert imp.lineno == 2
assert imp.col_offset == 0
s = """'doc string'\nfrom __future__ import with_statement\nother"""
m = rewrite(s)
assert isinstance(m.body[0], ast.Expr)
assert isinstance(m.body[0].value, ast.Str)
assert isinstance(m.body[1], ast.ImportFrom)
for imp in m.body[2:4]:
if sys.version_info < (3, 7):
assert isinstance(m.body[0], ast.Expr)
assert isinstance(m.body[0].value, ast.Str)
del m.body[0]
assert isinstance(m.body[0], ast.ImportFrom)
for imp in m.body[1:3]:
assert isinstance(imp, ast.Import)
assert imp.lineno == 3
assert imp.col_offset == 0
assert isinstance(m.body[4], ast.Expr)
assert isinstance(m.body[3], ast.Expr)
s = """from . import relative\nother_stuff"""
m = rewrite(s)
for imp in m.body[0:2]:
@@ -101,10 +119,14 @@ class TestAssertionRewrite(object):
def test_dont_rewrite(self):
s = """'PYTEST_DONT_REWRITE'\nassert 14"""
m = rewrite(s)
assert len(m.body) == 2
assert isinstance(m.body[0].value, ast.Str)
assert isinstance(m.body[1], ast.Assert)
assert m.body[1].msg is None
if sys.version_info < (3, 7):
assert len(m.body) == 2
assert isinstance(m.body[0], ast.Expr)
assert isinstance(m.body[0].value, ast.Str)
del m.body[0]
else:
assert len(m.body) == 1
assert m.body[0].msg is None

def test_name(self):
def f():
@@ -451,8 +473,8 @@ class TestAssertionRewrite(object):
def test_len(self):

def f():
l = list(range(10))
assert len(l) == 11
values = list(range(10))
assert len(values) == 11

assert getmsg(f).startswith("""assert 10 == 11
 + where 10 = len([""")

@@ -2,6 +2,7 @@ from __future__ import absolute_import, division, print_function
import pytest
import py

import _pytest._code
from _pytest.main import Session, EXIT_NOTESTSCOLLECTED, _in_venv


@@ -702,9 +703,9 @@ class TestNodekeywords(object):
def test_pass(): pass
def test_fail(): assert 0
""")
l = list(modcol.keywords)
assert modcol.name in l
for x in l:
values = list(modcol.keywords)
assert modcol.name in values
for x in values:
assert not x.startswith("_")
assert modcol.name in repr(modcol.keywords)

@@ -830,3 +831,28 @@ def test_continue_on_collection_errors_maxfail(testdir):
"*Interrupted: stopping after 3 failures*",
"*1 failed, 2 error*",
])


def test_fixture_scope_sibling_conftests(testdir):
"""Regression test case for https://github.com/pytest-dev/pytest/issues/2836"""
foo_path = testdir.mkpydir("foo")
foo_path.join("conftest.py").write(_pytest._code.Source("""
import pytest
@pytest.fixture
def fix():
return 1
"""))
foo_path.join("test_foo.py").write("def test_foo(fix): assert fix == 1")

# Tests in `food/` should not see the conftest fixture from `foo/`
food_path = testdir.mkpydir("food")
food_path.join("test_food.py").write("def test_food(fix): assert fix == 1")

res = testdir.runpytest()
assert res.ret == 1

res.stdout.fnmatch_lines([
"*ERROR at setup of test_food*",
"E*fixture 'fix' not found",
"*1 passed, 1 error*",
])

@@ -2,7 +2,8 @@ from __future__ import absolute_import, division, print_function
import sys

import pytest
from _pytest.compat import is_generator, get_real_func
from _pytest.compat import is_generator, get_real_func, safe_getattr
from _pytest.outcomes import OutcomeException


def test_is_generator():
@@ -74,3 +75,27 @@ def test_is_generator_async_syntax(testdir):
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines(['*1 passed*'])


class ErrorsHelper(object):
@property
def raise_exception(self):
raise Exception('exception should be catched')

@property
def raise_fail(self):
pytest.fail('fail should be catched')


def test_helper_failures():
helper = ErrorsHelper()
with pytest.raises(Exception):
helper.raise_exception
with pytest.raises(OutcomeException):
helper.raise_fail


def test_safe_getattr():
helper = ErrorsHelper()
assert safe_getattr(helper, 'raise_exception', 'default') == 'default'
assert safe_getattr(helper, 'raise_fail', 'default') == 'default'

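The new tests above pin down the contract of the `safe_getattr` helper this commit adds to `_pytest.compat`: attribute access on a misbehaving property must never propagate an exception, it must fall back to the default. A minimal standalone sketch of that behavior (an assumed implementation for illustration, not the actual `_pytest.compat` source):

```python
def safe_getattr(obj, name, default):
    """Like getattr(), but return *default* on any exception.

    Property access can raise arbitrary exceptions (including the
    outcome exceptions raised by pytest.fail), so everything that
    derives from Exception is swallowed here.
    """
    try:
        return getattr(obj, name, default)
    except Exception:
        return default


class BrokenHelper(object):
    # Hypothetical stand-in for the ErrorsHelper class in the diff.
    @property
    def broken(self):
        raise RuntimeError("boom")


helper = BrokenHelper()
print(safe_getattr(helper, "broken", "default"))   # exception swallowed
print(safe_getattr(helper, "missing", 42))         # plain missing attribute
```

Both calls return the supplied default: the first because the property raises, the second because the attribute does not exist.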
@@ -123,11 +123,11 @@ class TestConfigCmdlineParsing(object):
class TestConfigAPI(object):
def test_config_trace(self, testdir):
config = testdir.parseconfig()
l = []
config.trace.root.setwriter(l.append)
values = []
config.trace.root.setwriter(values.append)
config.trace("hello")
assert len(l) == 1
assert l[0] == "hello [config]\n"
assert len(values) == 1
assert values[0] == "hello [config]\n"

def test_config_getoption(self, testdir):
testdir.makeconftest("""
@@ -209,10 +209,10 @@ class TestConfigAPI(object):
paths=hello world/sub.py
""")
config = testdir.parseconfig()
l = config.getini("paths")
assert len(l) == 2
assert l[0] == p.dirpath('hello')
assert l[1] == p.dirpath('world/sub.py')
values = config.getini("paths")
assert len(values) == 2
assert values[0] == p.dirpath('hello')
assert values[1] == p.dirpath('world/sub.py')
pytest.raises(ValueError, config.getini, 'other')

def test_addini_args(self, testdir):
@@ -226,11 +226,11 @@ class TestConfigAPI(object):
args=123 "123 hello" "this"
""")
config = testdir.parseconfig()
l = config.getini("args")
assert len(l) == 3
assert l == ["123", "123 hello", "this"]
l = config.getini("a2")
assert l == list("123")
values = config.getini("args")
assert len(values) == 3
assert values == ["123", "123 hello", "this"]
values = config.getini("a2")
assert values == list("123")

def test_addini_linelist(self, testdir):
testdir.makeconftest("""
@@ -244,11 +244,11 @@ class TestConfigAPI(object):
second line
""")
config = testdir.parseconfig()
l = config.getini("xy")
assert len(l) == 2
assert l == ["123 345", "second line"]
l = config.getini("a2")
assert l == []
values = config.getini("xy")
assert len(values) == 2
assert values == ["123 345", "second line"]
values = config.getini("a2")
assert values == []

@pytest.mark.parametrize('str_val, bool_val',
[('True', True), ('no', False), ('no-ini', True)])
@@ -275,13 +275,13 @@ class TestConfigAPI(object):
xy= 123
""")
config = testdir.parseconfig()
l = config.getini("xy")
assert len(l) == 1
assert l == ["123"]
values = config.getini("xy")
assert len(values) == 1
assert values == ["123"]
config.addinivalue_line("xy", "456")
l = config.getini("xy")
assert len(l) == 2
assert l == ["123", "456"]
values = config.getini("xy")
assert len(values) == 2
assert values == ["123", "456"]

def test_addinivalue_line_new(self, testdir):
testdir.makeconftest("""
@@ -291,13 +291,13 @@ class TestConfigAPI(object):
config = testdir.parseconfig()
assert not config.getini("xy")
config.addinivalue_line("xy", "456")
l = config.getini("xy")
assert len(l) == 1
assert l == ["456"]
values = config.getini("xy")
assert len(values) == 1
assert values == ["456"]
config.addinivalue_line("xy", "123")
l = config.getini("xy")
assert len(l) == 2
assert l == ["456", "123"]
values = config.getini("xy")
assert len(values) == 2
assert values == ["456", "123"]

def test_confcutdir_check_isdir(self, testdir):
"""Give an error if --confcutdir is not a valid directory (#2078)"""
@@ -596,13 +596,13 @@ def test_load_initial_conftest_last_ordering(testdir):
m = My()
pm.register(m)
hc = pm.hook.pytest_load_initial_conftests
l = hc._nonwrappers + hc._wrappers
values = hc._nonwrappers + hc._wrappers
expected = [
"_pytest.config",
'test_config',
'_pytest.capture',
]
assert [x.function.__module__ for x in l] == expected
assert [x.function.__module__ for x in values] == expected


def test_get_plugin_specs_as_list():
@@ -623,17 +623,17 @@ def test_get_plugin_specs_as_list():
class TestWarning(object):
def test_warn_config(self, testdir):
testdir.makeconftest("""
l = []
values = []
def pytest_configure(config):
config.warn("C1", "hello")
def pytest_logwarning(code, message):
if message == "hello" and code == "C1":
l.append(1)
values.append(1)
""")
testdir.makepyfile("""
def test_proper(pytestconfig):
import conftest
assert conftest.l == [1]
assert conftest.values == [1]
""")
reprec = testdir.inline_run()
reprec.assertoutcome(passed=1)

@@ -87,8 +87,8 @@ def test_doubledash_considered(testdir):
conf.join("conftest.py").ensure()
conftest = PytestPluginManager()
conftest_setinitial(conftest, [conf.basename, conf.basename])
l = conftest._getconftestmodules(conf)
assert len(l) == 1
values = conftest._getconftestmodules(conf)
assert len(values) == 1


def test_issue151_load_all_conftests(testdir):
@@ -130,28 +130,28 @@ def test_conftestcutdir(testdir):
p = testdir.mkdir("x")
conftest = PytestPluginManager()
conftest_setinitial(conftest, [testdir.tmpdir], confcutdir=p)
l = conftest._getconftestmodules(p)
assert len(l) == 0
l = conftest._getconftestmodules(conf.dirpath())
assert len(l) == 0
values = conftest._getconftestmodules(p)
assert len(values) == 0
values = conftest._getconftestmodules(conf.dirpath())
assert len(values) == 0
assert conf not in conftest._conftestpath2mod
# but we can still import a conftest directly
conftest._importconftest(conf)
l = conftest._getconftestmodules(conf.dirpath())
assert l[0].__file__.startswith(str(conf))
values = conftest._getconftestmodules(conf.dirpath())
assert values[0].__file__.startswith(str(conf))
# and all sub paths get updated properly
l = conftest._getconftestmodules(p)
assert len(l) == 1
assert l[0].__file__.startswith(str(conf))
values = conftest._getconftestmodules(p)
assert len(values) == 1
assert values[0].__file__.startswith(str(conf))


def test_conftestcutdir_inplace_considered(testdir):
conf = testdir.makeconftest("")
conftest = PytestPluginManager()
conftest_setinitial(conftest, [conf.dirpath()], confcutdir=conf.dirpath())
l = conftest._getconftestmodules(conf.dirpath())
assert len(l) == 1
assert l[0].__file__.startswith(str(conf))
values = conftest._getconftestmodules(conf.dirpath())
assert len(values) == 1
assert values[0].__file__.startswith(str(conf))


@pytest.mark.parametrize("name", 'test tests whatever .dotdir'.split())

@@ -173,7 +173,7 @@ class TestDoctests(object):
"*UNEXPECTED*ZeroDivision*",
])

def test_docstring_context_around_error(self, testdir):
def test_docstring_partial_context_around_error(self, testdir):
"""Test that we show some context before the actual line of a failing
doctest.
"""
@@ -199,7 +199,7 @@ class TestDoctests(object):
''')
result = testdir.runpytest('--doctest-modules')
result.stdout.fnmatch_lines([
'*docstring_context_around_error*',
'*docstring_partial_context_around_error*',
'005*text-line-3',
'006*text-line-4',
'013*text-line-11',
@@ -213,6 +213,32 @@ class TestDoctests(object):
assert 'text-line-2' not in result.stdout.str()
assert 'text-line-after' not in result.stdout.str()

def test_docstring_full_context_around_error(self, testdir):
"""Test that we show the whole context before the actual line of a failing
doctest, provided that the context is up to 10 lines long.
"""
testdir.makepyfile('''
def foo():
"""
text-line-1
text-line-2

>>> 1 + 1
3
"""
''')
result = testdir.runpytest('--doctest-modules')
result.stdout.fnmatch_lines([
'*docstring_full_context_around_error*',
'003*text-line-1',
'004*text-line-2',
'006*>>> 1 + 1',
'Expected:',
'    3',
'Got:',
'    2',
])

def test_doctest_linedata_missing(self, testdir):
testdir.tmpdir.join('hello.py').write(_pytest._code.Source("""
class Fun(object):

@@ -169,6 +169,23 @@ def test_markers_option(testdir):
])


def test_ini_markers_whitespace(testdir):
testdir.makeini("""
[pytest]
markers =
a1 : this is a whitespace marker
""")
testdir.makepyfile("""
import pytest

@pytest.mark.a1
def test_markers():
assert True
""")
rec = testdir.inline_run("--strict", "-m", "a1")
rec.assertoutcome(passed=1)


def test_markers_option_with_plugin_in_current_dir(testdir):
testdir.makeconftest('pytest_plugins = "flip_flop"')
testdir.makepyfile(flip_flop="""\
@@ -342,6 +359,24 @@ def test_parametrized_collect_with_wrong_args(testdir):
])


def test_parametrized_with_kwargs(testdir):
"""Test collect parametrized func with wrong number of args."""
py_file = testdir.makepyfile("""
import pytest

@pytest.fixture(params=[1,2])
def a(request):
return request.param

@pytest.mark.parametrize(argnames='b', argvalues=[1, 2])
def test_func(a, b):
pass
""")

result = testdir.runpytest(py_file)
assert(result.ret == 0)


class TestFunctional(object):

def test_mark_per_function(self, testdir):
@@ -433,11 +468,11 @@ class TestFunctional(object):
assert marker.kwargs == {'x': 1, 'y': 2, 'z': 4}

# test the new __iter__ interface
l = list(marker)
assert len(l) == 3
assert l[0].args == ("pos0",)
assert l[1].args == ()
assert l[2].args == ("pos1", )
values = list(marker)
assert len(values) == 3
assert values[0].args == ("pos0",)
assert values[1].args == ()
assert values[2].args == ("pos1", )

@pytest.mark.xfail(reason='unfixed')
def test_merging_markers_deep(self, testdir):
@@ -529,9 +564,9 @@ class TestFunctional(object):
def test_func():
pass
""")
l = reprec.getfailedcollections()
assert len(l) == 1
assert "TypeError" in str(l[0].longrepr)
values = reprec.getfailedcollections()
assert len(values) == 1
assert "TypeError" in str(values[0].longrepr)

def test_mark_dynamically_in_funcarg(self, testdir):
testdir.makeconftest("""
@@ -540,8 +575,8 @@ class TestFunctional(object):
def arg(request):
request.applymarker(pytest.mark.hello)
def pytest_terminal_summary(terminalreporter):
l = terminalreporter.stats['passed']
terminalreporter.writer.line("keyword: %s" % l[0].keywords)
values = terminalreporter.stats['passed']
terminalreporter.writer.line("keyword: %s" % values[0].keywords)
""")
testdir.makepyfile("""
def test_func(arg):
@@ -564,10 +599,10 @@ class TestFunctional(object):
item, = items
keywords = item.keywords
marker = keywords['hello']
l = list(marker)
assert len(l) == 2
assert l[0].args == ("pos0",)
assert l[1].args == ("pos1",)
values = list(marker)
assert len(values) == 2
assert values[0].args == ("pos0",)
assert values[1].args == ("pos1",)

def test_no_marker_match_on_unmarked_names(self, testdir):
p = testdir.makepyfile("""
@@ -812,3 +847,15 @@ def test_legacy_transfer():
assert fake_method.fun
# pristine marks dont transfer
assert fake_method.pytestmark == [pytest.mark.fun.mark]


class TestMarkDecorator(object):

@pytest.mark.parametrize('lhs, rhs, expected', [
(pytest.mark.foo(), pytest.mark.foo(), True),
(pytest.mark.foo(), pytest.mark.bar(), False),
(pytest.mark.foo(), 'bar', False),
('foo', pytest.mark.bar(), False)
])
def test__eq__(self, lhs, rhs, expected):
assert (lhs == rhs) == expected

testing/test_nodes.py (new file, 18 lines)
@@ -0,0 +1,18 @@
import pytest

from _pytest import nodes


@pytest.mark.parametrize("baseid, nodeid, expected", (
('', '', True),
('', 'foo', True),
('', 'foo/bar', True),
('', 'foo/bar::TestBaz::()', True),
('foo', 'food', False),
('foo/bar::TestBaz::()', 'foo/bar', False),
('foo/bar::TestBaz::()', 'foo/bar::TestBop::()', False),
('foo/bar', 'foo/bar::TestBop::()', True),
))
def test_ischildnode(baseid, nodeid, expected):
result = nodes.ischildnode(baseid, nodeid)
assert result is expected
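The parametrized cases above fully specify the contract of the new `nodes.ischildnode` helper: a node id is a child of a base id when the base's `/`- and `::`-separated parts form a prefix of the node's parts, so plain string prefixes like `'foo'` vs `'food'` do not count. A standalone sketch that satisfies every case in the table (an assumed implementation for illustration, not the `_pytest.nodes` source):

```python
def _splitnode(nodeid):
    """Split a node id like "foo/bar::TestBaz::()" into its parts:
    ["foo", "bar", "TestBaz", "()"].  The empty id is the session
    root and splits into no parts at all.
    """
    if nodeid == "":
        return []
    parts = nodeid.split("::")
    # Only the first part (the file path) may contain slashes.
    parts[:1] = parts[0].split("/")
    return parts


def ischildnode(baseid, nodeid):
    """Return True if the node with *nodeid* is a child of *baseid*."""
    base_parts = _splitnode(baseid)
    node_parts = _splitnode(nodeid)
    if len(node_parts) < len(base_parts):
        return False
    return node_parts[:len(base_parts)] == base_parts


print(ischildnode("foo", "food"))                      # False
print(ischildnode("foo/bar", "foo/bar::TestBop::()"))  # True
```

Comparing whole parts rather than raw string prefixes is what makes the `('foo', 'food', False)` case come out right.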
@@ -8,18 +8,18 @@ def setup_module(mod):
|
||||
|
||||
def test_nose_setup(testdir):
|
||||
p = testdir.makepyfile("""
|
||||
l = []
|
||||
values = []
|
||||
from nose.tools import with_setup
|
||||
|
||||
@with_setup(lambda: l.append(1), lambda: l.append(2))
|
||||
@with_setup(lambda: values.append(1), lambda: values.append(2))
|
||||
def test_hello():
|
||||
assert l == [1]
|
||||
assert values == [1]
|
||||
|
||||
def test_world():
|
||||
assert l == [1,2]
|
||||
assert values == [1,2]
|
||||
|
||||
test_hello.setup = lambda: l.append(1)
|
||||
test_hello.teardown = lambda: l.append(2)
|
||||
test_hello.setup = lambda: values.append(1)
|
||||
test_hello.teardown = lambda: values.append(2)
|
||||
""")
|
||||
result = testdir.runpytest(p, '-p', 'nose')
|
||||
result.assert_outcomes(passed=2)
|
||||
@@ -27,15 +27,15 @@ def test_nose_setup(testdir):
|
||||
|
||||
def test_setup_func_with_setup_decorator():
|
||||
from _pytest.nose import call_optional
|
||||
l = []
|
||||
values = []
|
||||
|
||||
class A(object):
|
||||
@pytest.fixture(autouse=True)
|
||||
def f(self):
|
||||
l.append(1)
|
||||
values.append(1)
|
||||
|
||||
call_optional(A(), "f")
|
||||
assert not l
|
||||
assert not values
|
||||
|
||||
|
||||
def test_setup_func_not_callable():
|
||||
@@ -51,24 +51,24 @@ def test_nose_setup_func(testdir):
|
||||
p = testdir.makepyfile("""
|
||||
from nose.tools import with_setup
|
||||
|
||||
l = []
|
||||
values = []
|
||||
|
||||
def my_setup():
|
||||
a = 1
|
||||
l.append(a)
|
||||
values.append(a)
|
||||
|
||||
def my_teardown():
|
||||
b = 2
|
||||
l.append(b)
|
||||
values.append(b)
|
||||
|
||||
@with_setup(my_setup, my_teardown)
|
||||
def test_hello():
|
||||
print (l)
|
||||
assert l == [1]
|
||||
print (values)
|
||||
assert values == [1]
|
||||
|
||||
def test_world():
|
||||
print (l)
|
||||
assert l == [1,2]
|
||||
print (values)
|
||||
assert values == [1,2]
|
||||
|
||||
""")
|
||||
result = testdir.runpytest(p, '-p', 'nose')
|
||||
@@ -79,18 +79,18 @@ def test_nose_setup_func_failure(testdir):
|
||||
p = testdir.makepyfile("""
|
||||
from nose.tools import with_setup
|
||||
|
||||
l = []
|
||||
values = []
|
||||
my_setup = lambda x: 1
|
||||
my_teardown = lambda x: 2
|
||||
|
||||
@with_setup(my_setup, my_teardown)
|
||||
def test_hello():
|
||||
print (l)
|
||||
assert l == [1]
|
||||
print (values)
|
||||
assert values == [1]
|
||||
|
||||
def test_world():
|
||||
print (l)
|
||||
assert l == [1,2]
|
||||
print (values)
|
||||
assert values == [1,2]
|
||||
|
||||
""")
|
||||
result = testdir.runpytest(p, '-p', 'nose')
|
||||
@@ -101,13 +101,13 @@ def test_nose_setup_func_failure(testdir):
|
||||
|
||||
def test_nose_setup_func_failure_2(testdir):
|
||||
testdir.makepyfile("""
|
||||
l = []
|
||||
values = []
|
||||
|
||||
my_setup = 1
|
||||
my_teardown = 2
|
||||
|
||||
def test_hello():
|
||||
assert l == []
|
||||
assert values == []
|
||||
|
||||
test_hello.setup = my_setup
|
||||
test_hello.teardown = my_teardown
|
||||
@@ -121,26 +121,26 @@ def test_nose_setup_partial(testdir):
|
||||
p = testdir.makepyfile("""
|
||||
from functools import partial
|
||||
|
||||
l = []
|
||||
values = []
|
||||
|
||||
def my_setup(x):
|
||||
a = x
|
||||
l.append(a)
|
||||
values.append(a)
|
||||
|
||||
def my_teardown(x):
|
||||
b = x
|
||||
l.append(b)
|
||||
values.append(b)
|
||||
|
||||
my_setup_partial = partial(my_setup, 1)
|
||||
my_teardown_partial = partial(my_teardown, 2)
|
||||
|
||||
def test_hello():
|
||||
print (l)
|
||||
assert l == [1]
|
||||
print (values)
|
||||
assert values == [1]
|
||||
|
||||
def test_world():
|
||||
print (l)
|
||||
assert l == [1,2]
|
||||
print (values)
|
||||
assert values == [1,2]
|
||||
|
||||
test_hello.setup = my_setup_partial
|
||||
test_hello.teardown = my_teardown_partial
|
||||
@@ -251,19 +251,19 @@ def test_module_level_setup(testdir):
|
||||
|
||||
def test_nose_style_setup_teardown(testdir):
|
||||
testdir.makepyfile("""
|
||||
l = []
|
||||
values = []
|
||||
|
||||
def setup_module():
|
||||
l.append(1)
|
||||
values.append(1)
|
||||
|
||||
def teardown_module():
|
||||
del l[0]
|
||||
del values[0]
|
||||
|
||||
def test_hello():
|
||||
assert l == [1]
|
||||
assert values == [1]
|
||||
|
||||
def test_world():
|
||||
assert l == [1]
|
||||
assert values == [1]
|
||||
""")
|
||||
result = testdir.runpytest('-p', 'nose')
|
||||
result.stdout.fnmatch_lines([
|
||||
|
||||
@@ -85,23 +85,23 @@ class TestPytestPluginInteractions(object):
|
||||
|
||||
def test_configure(self, testdir):
|
||||
config = testdir.parseconfig()
|
||||
l = []
|
||||
values = []
|
||||
|
||||
class A(object):
|
||||
def pytest_configure(self, config):
|
||||
l.append(self)
|
||||
values.append(self)
|
||||
|
||||
config.pluginmanager.register(A())
|
||||
assert len(l) == 0
|
||||
assert len(values) == 0
|
||||
config._do_configure()
|
||||
assert len(l) == 1
|
||||
assert len(values) == 1
|
||||
config.pluginmanager.register(A()) # leads to a configured() plugin
|
||||
assert len(l) == 2
|
||||
assert l[0] != l[1]
|
||||
assert len(values) == 2
|
||||
assert values[0] != values[1]
|
||||
|
||||
config._ensure_unconfigure()
|
||||
config.pluginmanager.register(A())
|
||||
assert len(l) == 2
|
||||
assert len(values) == 2
|
||||
|
||||
def test_hook_tracing(self):
|
||||
pytestpm = get_config().pluginmanager # fully initialized with plugins
|
||||
@@ -116,19 +116,19 @@ class TestPytestPluginInteractions(object):
|
||||
saveindent.append(pytestpm.trace.root.indent)
|
||||
raise ValueError()
|
||||
|
||||
l = []
|
||||
pytestpm.trace.root.setwriter(l.append)
|
||||
values = []
|
||||
pytestpm.trace.root.setwriter(values.append)
|
||||
undo = pytestpm.enable_tracing()
|
||||
try:
|
||||
indent = pytestpm.trace.root.indent
|
||||
p = api1()
|
||||
pytestpm.register(p)
|
||||
assert pytestpm.trace.root.indent == indent
|
||||
assert len(l) >= 2
|
||||
assert 'pytest_plugin_registered' in l[0]
|
||||
assert 'finish' in l[1]
|
||||
assert len(values) >= 2
|
||||
assert 'pytest_plugin_registered' in values[0]
|
||||
assert 'finish' in values[1]
|
||||
|
||||
l[:] = []
|
||||
values[:] = []
|
||||
with pytest.raises(ValueError):
|
||||
pytestpm.register(api2())
|
||||
assert pytestpm.trace.root.indent == indent
|
||||
@@ -230,12 +230,12 @@ class TestPytestPluginManager(object):
|
||||
mod = py.std.types.ModuleType("x.y.pytest_hello")
|
||||
pm.register(mod)
|
||||
assert pm.is_registered(mod)
|
||||
l = pm.get_plugins()
|
||||
assert mod in l
|
||||
values = pm.get_plugins()
|
||||
assert mod in values
|
||||
pytest.raises(ValueError, "pm.register(mod)")
|
||||
pytest.raises(ValueError, lambda: pm.register(mod))
|
||||
# assert not pm.is_registered(mod2)
|
||||
assert pm.get_plugins() == l
|
||||
assert pm.get_plugins() == values
|
||||
|
||||
def test_canonical_import(self, monkeypatch):
|
||||
mod = py.std.types.ModuleType("pytest_xyz")
|
||||
@@ -269,8 +269,8 @@ class TestPytestPluginManager(object):
|
||||
|
||||
# check that it is not registered twice
|
||||
pytestpm.consider_module(mod)
|
||||
l = reprec.getcalls("pytest_plugin_registered")
|
||||
assert len(l) == 1
|
||||
values = reprec.getcalls("pytest_plugin_registered")
|
||||
assert len(values) == 1
|
||||
|
||||
def test_consider_env_fails_to_import(self, monkeypatch, pytestpm):
|
||||
monkeypatch.setenv('PYTEST_PLUGINS', 'nonexisting', prepend=",")
|
||||
|
||||
@@ -30,10 +30,10 @@ class TestWarningsRecorderChecker(object):
|
||||
assert len(rec.list) == 2
|
||||
warn = rec.pop()
|
||||
assert str(warn.message) == "hello"
|
||||
l = rec.list
|
||||
values = rec.list
|
||||
rec.clear()
|
||||
assert len(rec.list) == 0
|
||||
assert l is rec.list
|
||||
assert values is rec.list
|
||||
pytest.raises(AssertionError, "rec.pop()")
|
||||
|
||||
def test_typechecking(self):
|
||||
|
||||
@@ -13,12 +13,12 @@ class TestSetupState(object):
|
||||
def test_setup(self, testdir):
|
||||
ss = runner.SetupState()
|
||||
item = testdir.getitem("def test_func(): pass")
|
||||
l = [1]
|
||||
values = [1]
|
||||
ss.prepare(item)
|
||||
ss.addfinalizer(l.pop, colitem=item)
|
||||
assert l
|
||||
ss.addfinalizer(values.pop, colitem=item)
|
||||
assert values
|
||||
ss._pop_and_teardown()
|
||||
assert not l
|
||||
assert not values
|
||||
|
||||
def test_teardown_exact_stack_empty(self, testdir):
|
||||
item = testdir.getitem("def test_func(): pass")
|
||||
|
||||
@@ -39,20 +39,20 @@ def test_module_and_function_setup(testdir):

 def test_module_setup_failure_no_teardown(testdir):
     reprec = testdir.inline_runsource("""
-        l = []
+        values = []
         def setup_module(module):
-            l.append(1)
+            values.append(1)
             0/0

         def test_nothing():
             pass

         def teardown_module(module):
-            l.append(2)
+            values.append(2)
     """)
     reprec.assertoutcome(failed=1)
     calls = reprec.getcalls("pytest_runtest_setup")
-    assert calls[0].item.module.l == [1]
+    assert calls[0].item.module.values == [1]


 def test_setup_function_failure_no_teardown(testdir):
@@ -45,9 +45,9 @@ class SessionTests(object):
             a = 1
         """)
         reprec = testdir.inline_run(tfile)
-        l = reprec.getfailedcollections()
-        assert len(l) == 1
-        out = str(l[0].longrepr)
+        values = reprec.getfailedcollections()
+        assert len(values) == 1
+        out = str(values[0].longrepr)
         assert out.find('does_not_work') != -1

     def test_raises_output(self, testdir):
@@ -75,9 +75,9 @@ class SessionTests(object):

     def test_syntax_error_module(self, testdir):
         reprec = testdir.inline_runsource("this is really not python")
-        l = reprec.getfailedcollections()
-        assert len(l) == 1
-        out = str(l[0].longrepr)
+        values = reprec.getfailedcollections()
+        assert len(values) == 1
+        out = str(values[0].longrepr)
         assert out.find(str('not python')) != -1

     def test_exit_first_problem(self, testdir):
@@ -144,15 +144,15 @@ class TestNewSession(SessionTests):

     def test_order_of_execution(self, testdir):
         reprec = testdir.inline_runsource("""
-            l = []
+            values = []
             def test_1():
-                l.append(1)
+                values.append(1)
             def test_2():
-                l.append(2)
+                values.append(2)
             def test_3():
-                assert l == [1,2]
+                assert values == [1,2]
             class Testmygroup(object):
-                reslist = l
+                reslist = values
                 def test_1(self):
                     self.reslist.append(1)
                 def test_2(self):
@@ -242,13 +242,13 @@ def test_exclude(testdir):
 def test_sessionfinish_with_start(testdir):
     testdir.makeconftest("""
         import os
-        l = []
+        values = []
         def pytest_sessionstart():
-            l.append(os.getcwd())
+            values.append(os.getcwd())
             os.chdir("..")

         def pytest_sessionfinish():
-            assert l[0] == os.getcwd()
+            assert values[0] == os.getcwd()

     """)
     res = testdir.runpytest("--collect-only")
@@ -679,9 +679,9 @@ def test_skip_reasons_folding():
     ev2.longrepr = longrepr
     ev2.skipped = True

-    l = folded_skips([ev1, ev2])
-    assert len(l) == 1
-    num, fspath, lineno, reason = l[0]
+    values = folded_skips([ev1, ev2])
+    assert len(values) == 1
+    num, fspath, lineno, reason = values[0]
     assert num == 2
     assert fspath == path
     assert lineno == lineno
@@ -24,12 +24,12 @@ class Option(object):

     @property
     def args(self):
-        l = []
+        values = []
         if self.verbose:
-            l.append('-v')
+            values.append('-v')
         if self.fulltrace:
-            l.append('--fulltrace')
-        return l
+            values.append('--fulltrace')
+        return values


 def pytest_generate_tests(metafunc):
@@ -73,19 +73,19 @@ def test_setup(testdir):

 def test_setUpModule(testdir):
     testpath = testdir.makepyfile("""
-        l = []
+        values = []

         def setUpModule():
-            l.append(1)
+            values.append(1)

         def tearDownModule():
-            del l[0]
+            del values[0]

         def test_hello():
-            assert l == [1]
+            assert values == [1]

         def test_world():
-            assert l == [1]
+            assert values == [1]
     """)
     result = testdir.runpytest(testpath)
     result.stdout.fnmatch_lines([
@@ -95,13 +95,13 @@ def test_setUpModule(testdir):

 def test_setUpModule_failing_no_teardown(testdir):
     testpath = testdir.makepyfile("""
-        l = []
+        values = []

         def setUpModule():
            0/0

         def tearDownModule():
-            l.append(1)
+            values.append(1)

         def test_hello():
             pass
@@ -109,7 +109,7 @@ def test_setUpModule_failing_no_teardown(testdir):
     reprec = testdir.inline_run(testpath)
     reprec.assertoutcome(passed=0, failed=1)
     call = reprec.getcalls("pytest_runtest_setup")[0]
-    assert not call.item.module.l
+    assert not call.item.module.values


 def test_new_instances(testdir):
@@ -129,14 +129,14 @@ def test_teardown(testdir):
     testpath = testdir.makepyfile("""
         import unittest
         class MyTestCase(unittest.TestCase):
-            l = []
+            values = []
             def test_one(self):
                 pass
             def tearDown(self):
-                self.l.append(None)
+                self.values.append(None)
         class Second(unittest.TestCase):
             def test_check(self):
-                self.assertEqual(MyTestCase.l, [None])
+                self.assertEqual(MyTestCase.values, [None])
     """)
     reprec = testdir.inline_run(testpath)
     passed, skipped, failed = reprec.countoutcomes()
@@ -144,6 +144,8 @@ def test_unicode(testdir, pyfile_with_warnings):
 @pytest.mark.skipif(sys.version_info >= (3, 0),
                     reason='warnings message is broken as it is not str instance')
 def test_py2_unicode(testdir, pyfile_with_warnings):
+    if getattr(sys, "pypy_version_info", ())[:2] == (5, 9) and sys.platform.startswith('win'):
+        pytest.xfail("fails with unicode error on PyPy2 5.9 and Windows (#2905)")
     testdir.makepyfile('''
         # -*- coding: utf8 -*-
         import warnings
tox.ini
@@ -12,10 +12,10 @@ envlist =
     py36
     py37
     pypy
-    {py27,py35}-{pexpect,xdist,trial,numpy}
+    {py27,py36}-{pexpect,xdist,trial,numpy}
     py27-nobyte
     doctesting
-    freeze
+    py35-freeze
     docs

 [testenv]
@@ -37,7 +37,6 @@ deps =

 [testenv:py27-subprocess]
 changedir = .
-basepython = python2.7
 deps =
     pytest-xdist>=1.13
     mock
@@ -65,10 +64,11 @@ deps =
     mock
     nose
     hypothesis>=3.5.2
 changedir=testing
 commands =
-    pytest -n1 -rfsxX {posargs:testing}
+    pytest -n1 -rfsxX {posargs:.}

-[testenv:py35-xdist]
+[testenv:py36-xdist]
 deps = {[testenv:py27-xdist]deps}
 commands =
     pytest -n3 -rfsxX {posargs:testing}
@@ -80,7 +80,7 @@ deps = pexpect
 commands =
     pytest -rfsxX test_pdb.py test_terminal.py test_unittest.py

-[testenv:py35-pexpect]
+[testenv:py36-pexpect]
 changedir = testing
 platform = linux|darwin
 deps = {[testenv:py27-pexpect]deps}
@@ -92,17 +92,18 @@ deps =
     pytest-xdist>=1.13
     hypothesis>=3.5.2
 distribute = true
 changedir=testing
 setenv =
     PYTHONDONTWRITEBYTECODE=1
 commands =
-    pytest -n3 -rfsxX {posargs:testing}
+    pytest -n3 -rfsxX {posargs:.}

 [testenv:py27-trial]
 deps = twisted
 commands =
     pytest -ra {posargs:testing/test_unittest.py}

-[testenv:py35-trial]
+[testenv:py36-trial]
 deps = {[testenv:py27-trial]deps}
 commands =
     pytest -ra {posargs:testing/test_unittest.py}
@@ -112,7 +113,7 @@ deps=numpy
 commands=
     pytest -rfsxX {posargs:testing/python/approx.py}

-[testenv:py35-numpy]
+[testenv:py36-numpy]
 deps=numpy
 commands=
     pytest -rfsxX {posargs:testing/python/approx.py}
@@ -169,7 +170,7 @@ changedir = testing
 commands =
     {envpython} {envbindir}/py.test-jython -rfsxX {posargs}

-[testenv:freeze]
+[testenv:py35-freeze]
 changedir = testing/freeze
 deps = pyinstaller
 commands =
@@ -180,7 +181,6 @@ commands =

 [testenv:coveralls]
 passenv = TRAVIS TRAVIS_JOB_ID TRAVIS_BRANCH COVERALLS_REPO_TOKEN
 usedevelop = True
-basepython = python3.5
 changedir = .
 deps =
     {[testenv]deps}
@@ -200,6 +200,7 @@ python_files = test_*.py *_test.py testing/*/*.py
 python_classes = Test Acceptance
 python_functions = test
 norecursedirs = .tox ja .hg cx_freeze_source
 xfail_strict=true
 filterwarnings =
     error
+    # produced by path.local
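Nearly every hunk in the test files above applies the same mechanical change: a throwaway list named `l` is renamed to `values`. PEP 8 explicitly warns against single-character names like `l`, which is easily confused with `1` or `I` in many fonts (flake8 flags this as E741). A minimal before/after sketch of the pattern, using hypothetical functions rather than code taken from the pytest test suite:

```python
# Before: a list accumulator named "l" -- the style the diff removes.
# PEP 8 discourages "l" as a name because it is hard to distinguish
# from "1" and "I".
def collect_before():
    l = []  # noqa: E741
    l.append(1)
    l.append(2)
    return l


# After: identical logic under the unambiguous name "values".
def collect_after():
    values = []
    values.append(1)
    values.append(2)
    return values


# The rename is purely mechanical; behavior is unchanged.
assert collect_before() == collect_after() == [1, 2]
```

Because only the binding name changes, the rename is safe to apply wholesale across a test suite, which is why the diff touches so many files with no behavioral edits besides the PyPy xfail and the tox.ini environment updates.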