Merge branch 'master' into typo_for_parametrize

Alan Velasco 2018-01-30 16:22:54 -06:00 committed by GitHub
commit e12a588c39
70 changed files with 2123 additions and 804 deletions

.gitignore

@@ -33,6 +33,7 @@ env/
3rdparty/
.tox
.cache
+.pytest_cache
.coverage
.ropeproject
.idea


@@ -3,12 +3,15 @@ merlinux GmbH, Germany, office at merlinux eu
Contributors include::
+Aaron Coleman
Abdeali JK
Abhijeet Kasurde
Ahn Ki-Wook
+Alan Velasco
Alexander Johnson
Alexei Kozlenok
Anatoly Bubenkoff
+Anders Hovmöller
Andras Tim
Andreas Zeidler
Andrzej Ostrowski
@@ -17,6 +20,7 @@ Anthon van der Neut
Anthony Sottile
Antony Lee
Armin Rigo
+Aron Coyle
Aron Curzon
Aviv Palivoda
Barney Gale
@@ -150,6 +154,7 @@ Punyashloka Biswal
Quentin Pradet
Ralf Schmitt
Ran Benita
+Raphael Castaneda
Raphael Pierzina
Raquel Alegre
Ravi Chandra


@@ -8,6 +8,138 @@
.. towncrier release notes start
Pytest 3.4.0 (2018-01-30)
=========================
Deprecations and Removals
-------------------------
- All pytest classes now subclass ``object`` for better Python 2/3 compatibility.
This should not affect user code except in very rare edge cases. (`#2147
<https://github.com/pytest-dev/pytest/issues/2147>`_)
Features
--------
- Introduce ``empty_parameter_set_mark`` ini option to select which mark to
apply when ``@pytest.mark.parametrize`` is given an empty set of parameters.
Valid options are ``skip`` (default) and ``xfail``. Note that it is planned
to change the default to ``xfail`` in future releases as this is considered
less error prone. (`#2527
<https://github.com/pytest-dev/pytest/issues/2527>`_)
- **Incompatible change**: after community feedback the `logging
<https://docs.pytest.org/en/latest/logging.html>`_ functionality has
undergone some changes. Please consult the `logging documentation
<https://docs.pytest.org/en/latest/logging.html#incompatible-changes-in-pytest-3-4>`_
for details. (`#3013 <https://github.com/pytest-dev/pytest/issues/3013>`_)
- Console output falls back to "classic" mode when capturing is disabled (``-s``),
otherwise the output gets garbled to the point of being useless. (`#3038
<https://github.com/pytest-dev/pytest/issues/3038>`_)
- New `pytest_runtest_logfinish
<https://docs.pytest.org/en/latest/writing_plugins.html#_pytest.hookspec.pytest_runtest_logfinish>`_
hook which is called when a test item has finished executing, analogous to
`pytest_runtest_logstart
<https://docs.pytest.org/en/latest/writing_plugins.html#_pytest.hookspec.pytest_runtest_start>`_.
(`#3101 <https://github.com/pytest-dev/pytest/issues/3101>`_)
- Improve performance when collecting tests using many fixtures. (`#3107
<https://github.com/pytest-dev/pytest/issues/3107>`_)
- New ``caplog.get_records(when)`` method which provides access to the captured
records for the ``"setup"``, ``"call"`` and ``"teardown"``
testing stages. (`#3117 <https://github.com/pytest-dev/pytest/issues/3117>`_)
- New fixture ``record_xml_attribute`` that allows modifying and inserting
attributes on the ``<testcase>`` xml node in JUnit reports. (`#3130
<https://github.com/pytest-dev/pytest/issues/3130>`_)
- The default cache directory has been renamed from ``.cache`` to
``.pytest_cache`` after community feedback that the name ``.cache`` did not
make it clear that it was used by pytest. (`#3138
<https://github.com/pytest-dev/pytest/issues/3138>`_)
- Colorize the levelname column in the live-log output. (`#3142
<https://github.com/pytest-dev/pytest/issues/3142>`_)
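A hedged sketch of how the new ``empty_parameter_set_mark`` option behaves (the ini fragment and function names below are illustrative, not taken from this diff):

```python
import pytest

# pytest.ini (illustrative):
#   [pytest]
#   empty_parameter_set_mark = xfail
#
# With the default ("skip"), a parametrize call that receives an empty
# set of parameters produces a skipped test; with "xfail" it is instead
# reported as expected-to-fail, which surfaces accidental empty sets.

def discover_backends():
    # hypothetical helper that may legitimately return [] on some platforms
    return []

@pytest.mark.parametrize("backend", discover_backends())
def test_backend_roundtrip(backend):
    assert backend is not None
```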
Bug Fixes
---------
- Fix hanging pexpect test on MacOS by using flush() instead of wait().
(`#2022 <https://github.com/pytest-dev/pytest/issues/2022>`_)
- Fix restoring Python state after in-process pytest runs with the
``pytester`` plugin; this may break tests using multiple inprocess
pytest runs if later ones depend on earlier ones leaking global interpreter
changes. (`#3016 <https://github.com/pytest-dev/pytest/issues/3016>`_)
- Fix skipping plugin reporting hook when test aborted before plugin setup
hook. (`#3074 <https://github.com/pytest-dev/pytest/issues/3074>`_)
- Fix progress percentage reported when tests fail during teardown. (`#3088
<https://github.com/pytest-dev/pytest/issues/3088>`_)
- **Incompatible change**: ``-o/--override`` option no longer eats all the
remaining options, which can lead to surprising behavior: for example,
``pytest -o foo=1 /path/to/test.py`` would fail because ``/path/to/test.py``
would be considered as part of the ``-o`` command-line argument. One
consequence of this is that now multiple configuration overrides need
multiple ``-o`` flags: ``pytest -o foo=1 -o bar=2``. (`#3103
<https://github.com/pytest-dev/pytest/issues/3103>`_)
Improved Documentation
----------------------
- Document hooks (defined with ``historic=True``) which cannot be used with
``hookwrapper=True``. (`#2423
<https://github.com/pytest-dev/pytest/issues/2423>`_)
- Clarify that warning capturing doesn't change the warning filter by default.
(`#2457 <https://github.com/pytest-dev/pytest/issues/2457>`_)
- Clarify a possible confusion when using pytest_fixture_setup with fixture
functions that return None. (`#2698
<https://github.com/pytest-dev/pytest/issues/2698>`_)
- Fix the wording of a sentence on doctest flags used in pytest. (`#3076
<https://github.com/pytest-dev/pytest/issues/3076>`_)
- Prefer ``https://*.readthedocs.io`` over ``http://*.rtfd.org`` for links in
the documentation. (`#3092
<https://github.com/pytest-dev/pytest/issues/3092>`_)
- Improve readability (wording, grammar) of Getting Started guide (`#3131
<https://github.com/pytest-dev/pytest/issues/3131>`_)
- Added note that calling pytest.main multiple times from the same process is
not recommended because of import caching. (`#3143
<https://github.com/pytest-dev/pytest/issues/3143>`_)
Trivial/Internal Changes
------------------------
- Show a simple and easy error when keyword expressions trigger a syntax error
(for example, ``"-k foo and import"`` will show an error that you can not use
the ``import`` keyword in expressions). (`#2953
<https://github.com/pytest-dev/pytest/issues/2953>`_)
- Change parametrized automatic test id generation to use the ``__name__``
attribute of functions instead of the fallback argument name plus counter.
(`#2976 <https://github.com/pytest-dev/pytest/issues/2976>`_)
- Replace py.std with stdlib imports. (`#3067
<https://github.com/pytest-dev/pytest/issues/3067>`_)
- Corrected 'you' to 'your' in logging docs. (`#3129
<https://github.com/pytest-dev/pytest/issues/3129>`_)
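The ``__name__``-based id generation is easiest to see with function-valued parameters; a small illustrative sketch (function names here are made up):

```python
import pytest

def square(x):
    return x * x

def cube(x):
    return x * x * x

# Before this change the generated ids were "op0"/"op1"; with the
# __name__ fallback they become "square"/"cube", so the tests show up
# as test_operation[square] and test_operation[cube].
@pytest.mark.parametrize("op", [square, cube])
def test_operation(op):
    assert op(2) in (4, 8)
```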
Pytest 3.3.2 (2017-12-25)
=========================


@@ -12,7 +12,7 @@ taking a lot of time to make a new one.
#. Install development dependencies in a virtual environment with::
-    pip3 install -r tasks/requirements.txt
+    pip3 install -U -r tasks/requirements.txt
#. Create a branch ``release-X.Y.Z`` with the version for the release.


@@ -60,7 +60,7 @@ import os
from glob import glob
-class FastFilesCompleter:
+class FastFilesCompleter(object):
    'Fast file completer class'
    def __init__(self, directories=True):


@@ -56,7 +56,7 @@ class DummyRewriteHook(object):
    pass
-class AssertionState:
+class AssertionState(object):
    """State for the assertion plugin."""
    def __init__(self, config, mode):


@@ -17,7 +17,7 @@ class Cache(object):
        self.config = config
        self._cachedir = Cache.cache_dir_from_config(config)
        self.trace = config.trace.root.get("cache")
-        if config.getvalue("cacheclear"):
+        if config.getoption("cacheclear"):
            self.trace("clearing cachedir")
            if self._cachedir.check():
                self._cachedir.remove()
@@ -98,13 +98,13 @@ class Cache(object):
            json.dump(value, f, indent=2, sort_keys=True)

-class LFPlugin:
+class LFPlugin(object):
    """ Plugin which implements the --lf (run last-failing) option """
    def __init__(self, config):
        self.config = config
        active_keys = 'lf', 'failedfirst'
-        self.active = any(config.getvalue(key) for key in active_keys)
+        self.active = any(config.getoption(key) for key in active_keys)
        self.lastfailed = config.cache.get("cache/lastfailed", {})
        self._previously_failed_count = None
@@ -114,7 +114,8 @@ class LFPlugin:
            mode = "run all (no recorded failures)"
        else:
            noun = 'failure' if self._previously_failed_count == 1 else 'failures'
-            suffix = " first" if self.config.getvalue("failedfirst") else ""
+            suffix = " first" if self.config.getoption(
+                "failedfirst") else ""
            mode = "rerun previous {count} {noun}{suffix}".format(
                count=self._previously_failed_count, suffix=suffix, noun=noun
            )
@@ -151,7 +152,7 @@ class LFPlugin:
            # running a subset of all tests with recorded failures outside
            # of the set of tests currently executing
            return
-        if self.config.getvalue("lf"):
+        if self.config.getoption("lf"):
            items[:] = previously_failed
            config.hook.pytest_deselected(items=previously_passed)
        else:
@@ -159,7 +160,7 @@ class LFPlugin:
    def pytest_sessionfinish(self, session):
        config = self.config
-        if config.getvalue("cacheshow") or hasattr(config, "slaveinput"):
+        if config.getoption("cacheshow") or hasattr(config, "slaveinput"):
            return
        saved_lastfailed = config.cache.get("cache/lastfailed", {})
@@ -185,7 +186,7 @@ def pytest_addoption(parser):
        '--cache-clear', action='store_true', dest="cacheclear",
        help="remove all cache contents at start of test run.")
    parser.addini(
-        "cache_dir", default='.cache',
+        "cache_dir", default='.pytest_cache',
        help="cache directory path.")
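The cache is a plain JSON store under the configured ``cache_dir`` (now defaulting to ``.pytest_cache``); a minimal standalone sketch of the get/set round-trip, with made-up key names and a temp directory standing in for the resolved cache path:

```python
import json
import os
import tempfile

root = tempfile.mkdtemp()  # stand-in for the resolved cache_dir

def cache_set(key, value):
    """Store a JSON-serializable value under a slash-separated key."""
    path = os.path.join(root, "v", *key.split("/"))
    if not os.path.isdir(os.path.dirname(path)):
        os.makedirs(os.path.dirname(path))
    with open(path, "w") as f:
        json.dump(value, f, indent=2, sort_keys=True)

def cache_get(key, default=None):
    """Read a previously stored value, falling back to default."""
    path = os.path.join(root, "v", *key.split("/"))
    try:
        with open(path) as f:
            return json.load(f)
    except (IOError, OSError, ValueError):
        return default

cache_set("cache/lastfailed", {"test_mod.py::test_x": True})
print(cache_get("cache/lastfailed"))
```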


@@ -61,7 +61,7 @@ def pytest_load_initial_conftests(early_config, parser, args):
        sys.stderr.write(err)

-class CaptureManager:
+class CaptureManager(object):
    """
    Capture plugin, manages that the appropriate capture method is enabled/disabled during collection and each
    test phase (setup, call, teardown). After each of those points, the captured output is obtained and
@@ -271,7 +271,7 @@ def _install_capture_fixture_on_item(request, capture_class):
    del request.node._capture_fixture

-class CaptureFixture:
+class CaptureFixture(object):
    def __init__(self, captureclass, request):
        self.captureclass = captureclass
        self.request = request
@@ -416,11 +416,11 @@ class MultiCapture(object):
                self.err.snap() if self.err is not None else "")

-class NoCapture:
+class NoCapture(object):
    __init__ = start = done = suspend = resume = lambda *args: None

-class FDCaptureBinary:
+class FDCaptureBinary(object):
    """Capture IO to/from a given os-level filedescriptor.
    snap() produces `bytes`
@@ -506,7 +506,7 @@ class FDCapture(FDCaptureBinary):
        return res

-class SysCapture:
+class SysCapture(object):
    def __init__(self, fd, tmpfile=None):
        name = patchsysdict[fd]
        self._old = getattr(sys, name)
@@ -551,7 +551,7 @@ class SysCaptureBinary(SysCapture):
        return res

-class DontReadFromInput:
+class DontReadFromInput(object):
    """Temporary stub class. Ideally when stdin is accessed, the
    capturing should be turned off, with possibly all data captured
    so far sent to the screen. This should be configurable, though,


@@ -60,12 +60,13 @@ def main(args=None, plugins=None):
        finally:
            config._ensure_unconfigure()
    except UsageError as e:
+        tw = py.io.TerminalWriter(sys.stderr)
        for msg in e.args:
-            sys.stderr.write("ERROR: %s\n" % (msg,))
+            tw.line("ERROR: {}\n".format(msg), red=True)
        return 4

-class cmdline:  # compatibility namespace
+class cmdline(object):  # compatibility namespace
    main = staticmethod(main)
@@ -462,7 +463,7 @@ def _get_plugin_specs_as_list(specs):
        return []

-class Parser:
+class Parser(object):
    """ Parser for command line arguments and ini-file values.
    :ivar extra_info: dict of generic param -> value to display in case
@@ -597,7 +598,7 @@ class ArgumentError(Exception):
        return self.msg

-class Argument:
+class Argument(object):
    """class that mimics the necessary behaviour of optparse.Option
    its currently a least effort implementation
@@ -727,7 +728,7 @@ class Argument:
        return 'Argument({0})'.format(', '.join(args))

-class OptionGroup:
+class OptionGroup(object):
    def __init__(self, name, description="", parser=None):
        self.name = name
        self.description = description
@@ -858,7 +859,7 @@ class CmdOptions(object):
        return CmdOptions(self.__dict__)

-class Notset:
+class Notset(object):
    def __repr__(self):
        return "<NOTSET>"
@@ -1187,16 +1188,15 @@ class Config(object):
    def _get_override_ini_value(self, name):
        value = None
-        # override_ini is a list of list, to support both -o foo1=bar1 foo2=bar2 and
-        # and -o foo1=bar1 -o foo2=bar2 options
-        # always use the last item if multiple value set for same ini-name,
+        # override_ini is a list of "ini=value" options
+        # always use the last item if multiple values are set for same ini-name,
        # e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2
-        for ini_config_list in self._override_ini:
-            for ini_config in ini_config_list:
-                try:
-                    (key, user_ini_value) = ini_config.split("=", 1)
-                except ValueError:
-                    raise UsageError("-o/--override-ini expects option=value style.")
-                if key == name:
-                    value = user_ini_value
+        for ini_config in self._override_ini:
+            try:
+                key, user_ini_value = ini_config.split("=", 1)
+            except ValueError:
+                raise UsageError("-o/--override-ini expects option=value style.")
+            else:
+                if key == name:
+                    value = user_ini_value
        return value
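The reworked lookup is easy to check in isolation; a standalone sketch (the function name is ours, not pytest's):

```python
def lookup_override(override_ini, name):
    """override_ini is now a flat list of "key=value" strings, one per
    -o flag; the last matching entry wins."""
    value = None
    for ini_config in override_ini:
        try:
            # split on the first "=" only, so values may contain "="
            key, user_ini_value = ini_config.split("=", 1)
        except ValueError:
            raise ValueError("-o/--override-ini expects option=value style.")
        if key == name:
            value = user_ini_value
    return value

print(lookup_override(["foo=bar1", "foo=bar2"], "foo"))  # last one wins: bar2
```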


@@ -40,7 +40,7 @@ def pytest_configure(config):
    config._cleanup.append(fin)

-class pytestPDB:
+class pytestPDB(object):
    """ Pseudo PDB that defers to the real pdb. """
    _pluginmanager = None
    _config = None
@@ -62,7 +62,7 @@ class pytestPDB:
        cls._pdb_cls().set_trace(frame)

-class PdbInvoke:
+class PdbInvoke(object):
    def pytest_exception_interact(self, node, call, report):
        capman = node.config.pluginmanager.getplugin("capturemanager")
        if capman:


@@ -4,7 +4,7 @@ import functools
import inspect
import sys
import warnings
-from collections import OrderedDict
+from collections import OrderedDict, deque, defaultdict
import attr
import py
@@ -26,11 +26,12 @@ from _pytest.outcomes import fail, TEST_OUTCOME
def pytest_sessionstart(session):
    import _pytest.python
+    import _pytest.nodes
    scopename2class.update({
        'class': _pytest.python.Class,
        'module': _pytest.python.Module,
-        'function': _pytest.main.Item,
+        'function': _pytest.nodes.Item,
        'session': _pytest.main.Session,
    })
    session._fixturemanager = FixtureManager(session)
@@ -162,62 +163,51 @@ def get_parametrized_fixture_keys(item, scopenum):
def reorder_items(items):
    argkeys_cache = {}
+    items_by_argkey = {}
    for scopenum in range(0, scopenum_function):
        argkeys_cache[scopenum] = d = {}
+        items_by_argkey[scopenum] = item_d = defaultdict(list)
        for item in items:
            keys = OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
            if keys:
                d[item] = keys
-    return reorder_items_atscope(items, set(), argkeys_cache, 0)
+                for key in keys:
+                    item_d[key].append(item)
+    items = OrderedDict.fromkeys(items)
+    return list(reorder_items_atscope(items, set(), argkeys_cache, items_by_argkey, 0))

-def reorder_items_atscope(items, ignore, argkeys_cache, scopenum):
+def reorder_items_atscope(items, ignore, argkeys_cache, items_by_argkey, scopenum):
    if scopenum >= scopenum_function or len(items) < 3:
        return items
-    items_done = []
-    while 1:
-        items_before, items_same, items_other, newignore = \
-            slice_items(items, ignore, argkeys_cache[scopenum])
-        items_before = reorder_items_atscope(
-            items_before, ignore, argkeys_cache, scopenum + 1)
-        if items_same is None:
-            # nothing to reorder in this scope
-            assert items_other is None
-            return items_done + items_before
-        items_done.extend(items_before)
-        items = items_same + items_other
-        ignore = newignore
-
-
-def slice_items(items, ignore, scoped_argkeys_cache):
-    # we pick the first item which uses a fixture instance in the
-    # requested scope and which we haven't seen yet. We slice the input
-    # items list into a list of items_nomatch, items_same and
-    # items_other
-    if scoped_argkeys_cache:  # do we need to do work at all?
-        it = iter(items)
-        # first find a slicing key
-        for i, item in enumerate(it):
-            argkeys = scoped_argkeys_cache.get(item)
-            if argkeys is not None:
-                newargkeys = OrderedDict.fromkeys(k for k in argkeys if k not in ignore)
-                if newargkeys:  # found a slicing key
-                    slicing_argkey, _ = newargkeys.popitem()
-                    items_before = items[:i]
-                    items_same = [item]
-                    items_other = []
-                    # now slice the remainder of the list
-                    for item in it:
-                        argkeys = scoped_argkeys_cache.get(item)
-                        if argkeys and slicing_argkey in argkeys and \
-                                slicing_argkey not in ignore:
-                            items_same.append(item)
-                        else:
-                            items_other.append(item)
-                    newignore = ignore.copy()
-                    newignore.add(slicing_argkey)
-                    return (items_before, items_same, items_other, newignore)
-    return items, None, None, None
+    items_deque = deque(items)
+    items_done = OrderedDict()
+    scoped_items_by_argkey = items_by_argkey[scopenum]
+    scoped_argkeys_cache = argkeys_cache[scopenum]
+    while items_deque:
+        no_argkey_group = OrderedDict()
+        slicing_argkey = None
+        while items_deque:
+            item = items_deque.popleft()
+            if item in items_done or item in no_argkey_group:
+                continue
+            argkeys = OrderedDict.fromkeys(k for k in scoped_argkeys_cache.get(item, []) if k not in ignore)
+            if not argkeys:
+                no_argkey_group[item] = None
+            else:
+                slicing_argkey, _ = argkeys.popitem()
+                # we don't have to remove relevant items from later in the deque because they'll just be ignored
+                for i in reversed(scoped_items_by_argkey[slicing_argkey]):
+                    if i in items:
+                        items_deque.appendleft(i)
+                break
+        if no_argkey_group:
+            no_argkey_group = reorder_items_atscope(
+                no_argkey_group, set(), argkeys_cache, items_by_argkey, scopenum + 1)
+            for item in no_argkey_group:
+                items_done[item] = None
+        ignore.add(slicing_argkey)
+    return items_done


def fillfixtures(function):
@@ -246,7 +236,7 @@ def get_direct_param_fixture_func(request):
    return request.param

-class FuncFixtureInfo:
+class FuncFixtureInfo(object):
    def __init__(self, argnames, names_closure, name2fixturedefs):
        self.argnames = argnames
        self.names_closure = names_closure
@@ -442,7 +432,7 @@ class FixtureRequest(FuncargnamesCompatAttr):
            fixturedef = self._getnextfixturedef(argname)
        except FixtureLookupError:
            if argname == "request":
-                class PseudoFixtureDef:
+                class PseudoFixtureDef(object):
                    cached_result = (self, [0], None)
                    scope = "function"
                return PseudoFixtureDef
@@ -718,7 +708,7 @@ def call_fixture_func(fixturefunc, request, kwargs):
        return res

-class FixtureDef:
+class FixtureDef(object):
    """ A container for a factory definition. """
    def __init__(self, fixturemanager, baseid, argname, func, scope, params,
@@ -924,7 +914,7 @@ def pytestconfig(request):
    return request.config

-class FixtureManager:
+class FixtureManager(object):
    """
    pytest fixtures definitions and information is stored and managed
    from this class.
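The core idea of the deque-based reordering, stripped of scopes and caches, is to pull every item that shares the current fixture key to the front of the queue so the group runs consecutively and the fixture is set up once. A toy sketch (names and the simplified single-key model are ours):

```python
from collections import deque, OrderedDict

def group_by_key(items, key_of):
    """Toy version of the regrouping idea: when an item is reached, all
    not-yet-done items sharing its key are pulled to the front of the
    deque so the whole group is emitted consecutively."""
    by_key = {}
    for item in items:
        by_key.setdefault(key_of(item), []).append(item)
    dq = deque(items)
    done = OrderedDict()  # preserves emission order, deduplicates
    while dq:
        item = dq.popleft()
        if item in done:
            continue
        # pull every sibling sharing this key to the front of the deque
        for sibling in reversed(by_key[key_of(item)]):
            if sibling not in done:
                dq.appendleft(sibling)
        # consume the whole group now sitting at the front
        key = key_of(item)
        while dq and key_of(dq[0]) == key:
            done[dq.popleft()] = None
    return list(done)

print(group_by_key(["a1", "b1", "a2", "b2"], key_of=lambda s: s[0]))
```

Interleaved items ``a1, b1, a2, b2`` come out grouped as ``a1, a2, b1, b2``; stale duplicates left in the deque are simply skipped, mirroring the comment in the real implementation.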


@@ -57,9 +57,9 @@ def pytest_addoption(parser):
        action="store_true", dest="debug", default=False,
        help="store internal tracing debug information in 'pytestdebug.log'.")
    group._addoption(
-        '-o', '--override-ini', nargs='*', dest="override_ini",
+        '-o', '--override-ini', dest="override_ini",
        action="append",
-        help="override config option with option=value style, e.g. `-o xfail_strict=True`.")
+        help='override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.')

@pytest.hookimpl(hookwrapper=True)


@@ -179,7 +179,7 @@ def pytest_collection_modifyitems(session, config, items):
    :param _pytest.main.Session session: the pytest session object
    :param _pytest.config.Config config: pytest config object
-    :param List[_pytest.main.Item] items: list of item objects
+    :param List[_pytest.nodes.Item] items: list of item objects
    """
@@ -330,7 +330,25 @@ def pytest_runtest_protocol(item, nextitem):
def pytest_runtest_logstart(nodeid, location):
-    """ signal the start of running a single test item. """
+    """ signal the start of running a single test item.
+
+    This hook will be called **before** :func:`pytest_runtest_setup`, :func:`pytest_runtest_call` and
+    :func:`pytest_runtest_teardown` hooks.
+
+    :param str nodeid: full id of the item
+    :param location: a triple of ``(filename, linenum, testname)``
+    """
+
+
+def pytest_runtest_logfinish(nodeid, location):
+    """ signal the complete finish of running a single test item.
+
+    This hook will be called **after** :func:`pytest_runtest_setup`, :func:`pytest_runtest_call` and
+    :func:`pytest_runtest_teardown` hooks.
+
+    :param str nodeid: full id of the item
+    :param location: a triple of ``(filename, linenum, testname)``
+    """
def pytest_runtest_setup(item):
@@ -479,7 +497,7 @@ def pytest_terminal_summary(terminalreporter, exitstatus):
def pytest_logwarning(message, code, nodeid, fslocation):
    """ process a warning specified by a message, a code string,
    a nodeid and fslocation (both of which may be None
-    if the warning is not tied to a partilar node/location).
+    if the warning is not tied to a particular node/location).
    .. note::
        This hook is incompatible with ``hookwrapper=True``.


@@ -85,6 +85,9 @@ class _NodeReporter(object):
    def add_property(self, name, value):
        self.properties.append((str(name), bin_xml_escape(value)))

+    def add_attribute(self, name, value):
+        self.attrs[str(name)] = bin_xml_escape(value)

    def make_properties_node(self):
        """Return a Junit node containing custom properties, if any.
        """
@@ -98,6 +101,7 @@ class _NodeReporter(object):
    def record_testreport(self, testreport):
        assert not self.testcase
        names = mangle_test_address(testreport.nodeid)
+        existing_attrs = self.attrs
        classnames = names[:-1]
        if self.xml.prefix:
            classnames.insert(0, self.xml.prefix)
@@ -111,6 +115,7 @@ class _NodeReporter(object):
        if hasattr(testreport, "url"):
            attrs["url"] = testreport.url
        self.attrs = attrs
+        self.attrs.update(existing_attrs)  # restore any user-defined attributes

    def to_xml(self):
        testcase = Junit.testcase(time=self.duration, **self.attrs)
@@ -211,6 +216,27 @@ def record_xml_property(request):
        return add_property_noop

+@pytest.fixture
+def record_xml_attribute(request):
+    """Add extra xml attributes to the tag for the calling test.
+    The fixture is callable with ``(name, value)``, with value being automatically
+    xml-encoded
+    """
+    request.node.warn(
+        code='C3',
+        message='record_xml_attribute is an experimental feature',
+    )
+    xml = getattr(request.config, "_xml", None)
+    if xml is not None:
+        node_reporter = xml.node_reporter(request.node.nodeid)
+        return node_reporter.add_attribute
+    else:
+        def add_attr_noop(name, value):
+            pass
+        return add_attr_noop

def pytest_addoption(parser):
    group = parser.getgroup("terminal reporting")
    group.addoption(
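A hedged usage sketch for the new (experimental) fixture; under a real pytest run with ``--junitxml``, the recorded attributes end up on the test's ``<testcase>`` node (the attribute names below are illustrative):

```python
# test_example.py (illustrative)
def test_function(record_xml_attribute):
    # each call adds or overrides one attribute on this test's <testcase>
    record_xml_attribute("assertions", "REQ-1234")
    record_xml_attribute("classname", "custom_classname")
    assert True
```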


@@ -2,9 +2,10 @@ from __future__ import absolute_import, division, print_function
import logging
from contextlib import closing, contextmanager
-import sys
+import re

import six
+from _pytest.config import create_terminal_writer
import pytest
import py
@@ -13,6 +14,58 @@ DEFAULT_LOG_FORMAT = '%(filename)-25s %(lineno)4d %(levelname)-8s %(message)s'
DEFAULT_LOG_DATE_FORMAT = '%H:%M:%S'
class ColoredLevelFormatter(logging.Formatter):
    """
    Colorize the %(levelname)..s part of the log format passed to __init__.
    """

    LOGLEVEL_COLOROPTS = {
        logging.CRITICAL: {'red'},
        logging.ERROR: {'red', 'bold'},
        logging.WARNING: {'yellow'},
        logging.WARN: {'yellow'},
        logging.INFO: {'green'},
        logging.DEBUG: {'purple'},
        logging.NOTSET: set(),
    }
    LEVELNAME_FMT_REGEX = re.compile(r'%\(levelname\)([+-]?\d*s)')

    def __init__(self, terminalwriter, *args, **kwargs):
        super(ColoredLevelFormatter, self).__init__(
            *args, **kwargs)
        if six.PY2:
            self._original_fmt = self._fmt
        else:
            self._original_fmt = self._style._fmt
        self._level_to_fmt_mapping = {}

        levelname_fmt_match = self.LEVELNAME_FMT_REGEX.search(self._fmt)
        if not levelname_fmt_match:
            return
        levelname_fmt = levelname_fmt_match.group()

        for level, color_opts in self.LOGLEVEL_COLOROPTS.items():
            formatted_levelname = levelname_fmt % {
                'levelname': logging.getLevelName(level)}

            # add ANSI escape sequences around the formatted levelname
            color_kwargs = {name: True for name in color_opts}
            colorized_formatted_levelname = terminalwriter.markup(
                formatted_levelname, **color_kwargs)
            self._level_to_fmt_mapping[level] = self.LEVELNAME_FMT_REGEX.sub(
                colorized_formatted_levelname,
                self._fmt)

    def format(self, record):
        fmt = self._level_to_fmt_mapping.get(
            record.levelno, self._original_fmt)
        if six.PY2:
            self._fmt = fmt
        else:
            self._style._fmt = fmt
        return super(ColoredLevelFormatter, self).format(record)
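What the ``LEVELNAME_FMT_REGEX`` machinery does can be reproduced standalone; here a raw ANSI escape stands in for ``terminalwriter.markup``:

```python
import re

# Same pattern as the formatter: matches %(levelname)s including an
# optional width/alignment, e.g. %(levelname)-8s
LEVELNAME_FMT_REGEX = re.compile(r'%\(levelname\)([+-]?\d*s)')

fmt = '%(filename)-25s %(lineno)4d %(levelname)-8s %(message)s'
levelname_fmt = LEVELNAME_FMT_REGEX.search(fmt).group()  # '%(levelname)-8s'
formatted = levelname_fmt % {'levelname': 'WARNING'}     # 'WARNING ' (width 8)
colorized = '\x1b[33m' + formatted + '\x1b[0m'           # yellow, standing in for markup()
per_level_fmt = LEVELNAME_FMT_REGEX.sub(colorized, fmt)  # one pre-rendered fmt per level
print(per_level_fmt)
```

The formatter builds one such pre-colorized format string per log level up front, then swaps it in inside ``format()`` based on ``record.levelno``.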
def get_option_ini(config, *names):
    for name in names:
        ret = config.getoption(name)  # 'default' arg won't work as expected
@@ -48,6 +101,9 @@ def pytest_addoption(parser):
        '--log-date-format',
        dest='log_date_format', default=DEFAULT_LOG_DATE_FORMAT,
        help='log date format as used by the logging module.')
+    parser.addini(
+        'log_cli', default=False, type='bool',
+        help='enable log display during test run (also known as "live logging").')
    add_option_ini(
        '--log-cli-level',
        dest='log_cli_level', default=None,
@@ -79,13 +135,14 @@ def pytest_addoption(parser):
@contextmanager
-def catching_logs(handler, formatter=None, level=logging.NOTSET):
+def catching_logs(handler, formatter=None, level=None):
    """Context manager that prepares the whole logging machinery properly."""
    root_logger = logging.getLogger()

    if formatter is not None:
        handler.setFormatter(formatter)
-    handler.setLevel(level)
+    if level is not None:
+        handler.setLevel(level)

    # Adding the same handler twice would confuse logging system.
    # Just don't do that.
@@ -93,12 +150,14 @@ def catching_logs(handler, formatter=None, level=None):
    if add_new_handler:
        root_logger.addHandler(handler)
-    orig_level = root_logger.level
-    root_logger.setLevel(min(orig_level, level))
+    if level is not None:
+        orig_level = root_logger.level
+        root_logger.setLevel(level)
    try:
        yield handler
    finally:
-        root_logger.setLevel(orig_level)
+        if level is not None:
+            root_logger.setLevel(orig_level)
        if add_new_handler:
            root_logger.removeHandler(handler)
@@ -123,11 +182,40 @@ class LogCaptureFixture(object):

     def __init__(self, item):
         """Creates a new funcarg."""
         self._item = item
+        self._initial_log_levels = {}  # type: Dict[str, int]  # dict of log name -> log level

+    def _finalize(self):
+        """Finalizes the fixture.
+
+        This restores the log levels changed by :meth:`set_level`.
+        """
+        # restore log levels
+        for logger_name, level in self._initial_log_levels.items():
+            logger = logging.getLogger(logger_name)
+            logger.setLevel(level)
+
     @property
     def handler(self):
         return self._item.catch_log_handler

+    def get_records(self, when):
+        """
+        Get the logging records for one of the possible test phases.
+
+        :param str when:
+            Which test phase to obtain the records from. Valid values are: "setup", "call" and "teardown".
+
+        :rtype: List[logging.LogRecord]
+        :return: the list of captured records at the given stage
+
+        .. versionadded:: 3.4
+        """
+        handler = self._item.catch_log_handlers.get(when)
+        if handler:
+            return handler.records
+        else:
+            return []
+
     @property
     def text(self):
         """Returns the log text."""
@@ -154,31 +242,31 @@ class LogCaptureFixture(object):
         self.handler.records = []

     def set_level(self, level, logger=None):
-        """Sets the level for capturing of logs.
-
-        By default, the level is set on the handler used to capture
-        logs. Specify a logger name to instead set the level of any
-        logger.
+        """Sets the level for capturing of logs. The level will be restored to its previous value at the end of
+        the test.
+
+        :param int level: the logger to level.
+        :param str logger: the logger to update the level. If not given, the root logger level is updated.
+
+        .. versionchanged:: 3.4
+            The levels of the loggers changed by this function will be restored to their initial values at the
+            end of the test.
         """
-        if logger is None:
-            logger = self.handler
-        else:
-            logger = logging.getLogger(logger)
+        logger_name = logger
+        logger = logging.getLogger(logger_name)
+        # save the original log-level to restore it during teardown
+        self._initial_log_levels.setdefault(logger_name, logger.level)
         logger.setLevel(level)

     @contextmanager
     def at_level(self, level, logger=None):
-        """Context manager that sets the level for capturing of logs.
-
-        By default, the level is set on the handler used to capture
-        logs. Specify a logger name to instead set the level of any
-        logger.
+        """Context manager that sets the level for capturing of logs. After the end of the 'with' statement the
+        level is restored to its original value.
+
+        :param int level: the logger to level.
+        :param str logger: the logger to update the level. If not given, the root logger level is updated.
         """
-        if logger is None:
-            logger = self.handler
-        else:
-            logger = logging.getLogger(logger)
+        logger = logging.getLogger(logger)
         orig_level = logger.level
         logger.setLevel(level)
         try:
@@ -197,7 +285,9 @@ def caplog(request):
     * caplog.records() -> list of logging.LogRecord instances
     * caplog.record_tuples() -> list of (logger_name, level, message) tuples
     """
-    return LogCaptureFixture(request.node)
+    result = LogCaptureFixture(request.node)
+    yield result
+    result._finalize()
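The `_initial_log_levels` bookkeeping above uses `setdefault` so that only the first-seen level per logger is remembered; repeated `set_level` calls still restore the true original at teardown. A hypothetical stand-alone sketch of that idea (not pytest's actual class):

```python
import logging


class LevelRestorer(object):
    """Sketch of the bookkeeping LogCaptureFixture uses to restore log levels."""

    def __init__(self):
        self._initial_log_levels = {}  # logger name -> original level

    def set_level(self, level, logger=None):
        logger_name = logger
        logger_obj = logging.getLogger(logger_name)
        # remember only the *first* original level per logger
        self._initial_log_levels.setdefault(logger_name, logger_obj.level)
        logger_obj.setLevel(level)

    def finalize(self):
        # restore every touched logger to its pre-test level
        for name, level in self._initial_log_levels.items():
            logging.getLogger(name).setLevel(level)
```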
 def get_actual_log_level(config, *setting_names):
@@ -227,8 +317,12 @@ def get_actual_log_level(config, *setting_names):

 def pytest_configure(config):
-    config.pluginmanager.register(LoggingPlugin(config),
-                                  'logging-plugin')
+    config.pluginmanager.register(LoggingPlugin(config), 'logging-plugin')
+
+
+@contextmanager
+def _dummy_context_manager():
+    yield
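`_dummy_context_manager` is the standard no-op context-manager idiom, used where a `with` block is required but nothing must happen (Python 3.7 later added `contextlib.nullcontext` for the same purpose). A self-contained equivalent:

```python
from contextlib import contextmanager


@contextmanager
def dummy_context_manager():
    # no-op: yields nothing, sets up nothing, tears down nothing
    yield
```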
class LoggingPlugin(object):
@@ -241,57 +335,52 @@ class LoggingPlugin(object):
         The formatter can be safely shared across all handlers so
         create a single one for the entire test session here.
         """
-        self.log_cli_level = get_actual_log_level(
-            config, 'log_cli_level', 'log_level') or logging.WARNING
+        self._config = config
+
+        # enable verbose output automatically if live logging is enabled
+        if self._config.getini('log_cli') and not config.getoption('verbose'):
+            # sanity check: terminal reporter should not have been loaded at this point
+            assert self._config.pluginmanager.get_plugin('terminalreporter') is None
+            config.option.verbose = 1

         self.print_logs = get_option_ini(config, 'log_print')
-        self.formatter = logging.Formatter(
-            get_option_ini(config, 'log_format'),
-            get_option_ini(config, 'log_date_format'))
-
-        log_cli_handler = logging.StreamHandler(sys.stderr)
-        log_cli_format = get_option_ini(
-            config, 'log_cli_format', 'log_format')
-        log_cli_date_format = get_option_ini(
-            config, 'log_cli_date_format', 'log_date_format')
-        log_cli_formatter = logging.Formatter(
-            log_cli_format,
-            datefmt=log_cli_date_format)
-        self.log_cli_handler = log_cli_handler  # needed for a single unittest
-        self.live_logs = catching_logs(log_cli_handler,
-                                       formatter=log_cli_formatter,
-                                       level=self.log_cli_level)
+        self.formatter = logging.Formatter(get_option_ini(config, 'log_format'),
+                                           get_option_ini(config, 'log_date_format'))
+        self.log_level = get_actual_log_level(config, 'log_level')

         log_file = get_option_ini(config, 'log_file')
         if log_file:
-            self.log_file_level = get_actual_log_level(
-                config, 'log_file_level') or logging.WARNING
+            self.log_file_level = get_actual_log_level(config, 'log_file_level')

-            log_file_format = get_option_ini(
-                config, 'log_file_format', 'log_format')
-            log_file_date_format = get_option_ini(
-                config, 'log_file_date_format', 'log_date_format')
-            self.log_file_handler = logging.FileHandler(
-                log_file,
-                # Each pytest runtests session will write to a clean logfile
-                mode='w')
-            log_file_formatter = logging.Formatter(
-                log_file_format,
-                datefmt=log_file_date_format)
+            log_file_format = get_option_ini(config, 'log_file_format', 'log_format')
+            log_file_date_format = get_option_ini(config, 'log_file_date_format', 'log_date_format')
+            # Each pytest runtests session will write to a clean logfile
+            self.log_file_handler = logging.FileHandler(log_file, mode='w')
+            log_file_formatter = logging.Formatter(log_file_format, datefmt=log_file_date_format)
             self.log_file_handler.setFormatter(log_file_formatter)
         else:
             self.log_file_handler = None
+
+        # initialized during pytest_runtestloop
+        self.log_cli_handler = None
     @contextmanager
     def _runtest_for(self, item, when):
         """Implements the internals of pytest_runtest_xxx() hook."""
         with catching_logs(LogCaptureHandler(),
-                           formatter=self.formatter) as log_handler:
+                           formatter=self.formatter, level=self.log_level) as log_handler:
+            if self.log_cli_handler:
+                self.log_cli_handler.set_when(when)
+
+            if not hasattr(item, 'catch_log_handlers'):
+                item.catch_log_handlers = {}
+            item.catch_log_handlers[when] = log_handler
             item.catch_log_handler = log_handler
             try:
                 yield  # run test
             finally:
                 del item.catch_log_handler
+                if when == 'teardown':
+                    del item.catch_log_handlers

             if self.print_logs:
                 # Add a captured log section to the report.
@@ -313,10 +402,15 @@ class LoggingPlugin(object):
         with self._runtest_for(item, 'teardown'):
             yield

+    def pytest_runtest_logstart(self):
+        if self.log_cli_handler:
+            self.log_cli_handler.reset()
+
     @pytest.hookimpl(hookwrapper=True)
     def pytest_runtestloop(self, session):
         """Runs all collected test items."""
-        with self.live_logs:
+        self._setup_cli_logging()
+        with self.live_logs_context:
             if self.log_file_handler is not None:
                 with closing(self.log_file_handler):
                     with catching_logs(self.log_file_handler,
@@ -324,3 +418,69 @@ class LoggingPlugin(object):
                         yield  # run all the tests
             else:
                 yield  # run all the tests
def _setup_cli_logging(self):
"""Sets up the handler and logger for the Live Logs feature, if enabled.
This must be done right before starting the loop so we can access the terminal reporter plugin.
"""
terminal_reporter = self._config.pluginmanager.get_plugin('terminalreporter')
if self._config.getini('log_cli') and terminal_reporter is not None:
capture_manager = self._config.pluginmanager.get_plugin('capturemanager')
log_cli_handler = _LiveLoggingStreamHandler(terminal_reporter, capture_manager)
log_cli_format = get_option_ini(self._config, 'log_cli_format', 'log_format')
log_cli_date_format = get_option_ini(self._config, 'log_cli_date_format', 'log_date_format')
if self._config.option.color != 'no' and ColoredLevelFormatter.LEVELNAME_FMT_REGEX.search(log_cli_format):
log_cli_formatter = ColoredLevelFormatter(create_terminal_writer(self._config),
log_cli_format, datefmt=log_cli_date_format)
else:
log_cli_formatter = logging.Formatter(log_cli_format, datefmt=log_cli_date_format)
log_cli_level = get_actual_log_level(self._config, 'log_cli_level', 'log_level')
self.log_cli_handler = log_cli_handler
self.live_logs_context = catching_logs(log_cli_handler, formatter=log_cli_formatter, level=log_cli_level)
else:
self.live_logs_context = _dummy_context_manager()
class _LiveLoggingStreamHandler(logging.StreamHandler):
"""
Custom StreamHandler used by the live logging feature: it will write a newline before the first log message
in each test.
During live logging we must also explicitly disable stdout/stderr capturing otherwise it will get captured
and won't appear in the terminal.
"""
def __init__(self, terminal_reporter, capture_manager):
"""
:param _pytest.terminal.TerminalReporter terminal_reporter:
:param _pytest.capture.CaptureManager capture_manager:
"""
logging.StreamHandler.__init__(self, stream=terminal_reporter)
self.capture_manager = capture_manager
self.reset()
self.set_when(None)
def reset(self):
"""Reset the handler; should be called before the start of each test"""
self._first_record_emitted = False
def set_when(self, when):
"""Prepares for the given test phase (setup/call/teardown)"""
self._when = when
self._section_name_shown = False
def emit(self, record):
if self.capture_manager is not None:
self.capture_manager.suspend_global_capture()
try:
if not self._first_record_emitted or self._when == 'teardown':
self.stream.write('\n')
self._first_record_emitted = True
if not self._section_name_shown:
self.stream.section('live log ' + self._when, sep='-', bold=True)
self._section_name_shown = True
logging.StreamHandler.emit(self, record)
finally:
if self.capture_manager is not None:
self.capture_manager.resume_global_capture()
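The newline/section logic of `emit` above can be exercised without a real terminal or capture manager; the stream and handler below are simplified stand-ins for illustration, not pytest classes:

```python
import logging


class SectionStream(object):
    """Tiny stand-in for pytest's terminal writer (assumed interface: write/section)."""

    def __init__(self):
        self.lines = []

    def write(self, text):
        self.lines.append(text)

    def flush(self):
        pass

    def section(self, title, sep="-", bold=True):
        self.lines.append("%s %s %s" % (sep * 3, title, sep * 3))


class LiveHandlerSketch(logging.StreamHandler):
    """Sketch of _LiveLoggingStreamHandler's newline/section logic (no capture manager)."""

    def __init__(self, stream):
        logging.StreamHandler.__init__(self, stream=stream)
        self._first_record_emitted = False
        self._when = None
        self._section_name_shown = False

    def set_when(self, when):
        self._when = when
        self._section_name_shown = False

    def emit(self, record):
        # write a newline before the first record (and before teardown records)
        if not self._first_record_emitted or self._when == 'teardown':
            self.stream.write('\n')
            self._first_record_emitted = True
        # show the "live log <phase>" section header once per phase
        if not self._section_name_shown:
            self.stream.section('live log ' + str(self._when), sep='-')
            self._section_name_shown = True
        logging.StreamHandler.emit(self, record)
```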
@@ -12,16 +12,11 @@ import _pytest
 from _pytest import nodes
 import _pytest._code
 import py
-try:
-    from collections import MutableMapping as MappingMixin
-except ImportError:
-    from UserDict import DictMixin as MappingMixin

 from _pytest.config import directory_arg, UsageError, hookimpl
 from _pytest.outcomes import exit
 from _pytest.runner import collect_one_node

-tracebackcutdir = py.path.local(_pytest.__file__).dirpath()

 # exitcodes for the command line
 EXIT_OK = 0
@@ -248,7 +243,7 @@ def _patched_find_module():
         yield


-class FSHookProxy:
+class FSHookProxy(object):
     def __init__(self, fspath, pm, remove_mods):
         self.fspath = fspath
         self.pm = pm
@@ -260,356 +255,6 @@ class FSHookProxy:
         return x
class _CompatProperty(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(__import__('pytest'), self.name)
class NodeKeywords(MappingMixin):
def __init__(self, node):
self.node = node
self.parent = node.parent
self._markers = {node.name: True}
def __getitem__(self, key):
try:
return self._markers[key]
except KeyError:
if self.parent is None:
raise
return self.parent.keywords[key]
def __setitem__(self, key, value):
self._markers[key] = value
def __delitem__(self, key):
raise ValueError("cannot delete key in keywords dict")
def __iter__(self):
seen = set(self._markers)
if self.parent is not None:
seen.update(self.parent.keywords)
return iter(seen)
def __len__(self):
return len(self.__iter__())
def keys(self):
return list(self)
def __repr__(self):
return "<NodeKeywords for node %s>" % (self.node, )
class Node(object):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
def __init__(self, name, parent=None, config=None, session=None):
#: a unique name within the scope of the parent node
self.name = name
#: the parent collector node.
self.parent = parent
#: the pytest config object
self.config = config or parent.config
#: the session this node is part of
self.session = session or parent.session
#: filesystem path where this node was collected from (can be None)
self.fspath = getattr(parent, 'fspath', None)
#: keywords/markers collected from all scopes
self.keywords = NodeKeywords(self)
#: allow adding of extra keywords to use for matching
self.extra_keyword_matches = set()
# used for storing artificial fixturedefs for direct parametrization
self._name2pseudofixturedef = {}
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(__import__('pytest'), name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
return "<%s %r>" % (self.__class__.__name__,
getattr(self, 'name', None))
def warn(self, code, message):
""" generate a warning with the given code and message for this
item. """
assert isinstance(code, str)
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
# methods for ordering nodes
@property
def nodeid(self):
""" a ::-separated string denoting its collection tree address. """
try:
return self._nodeid
except AttributeError:
self._nodeid = x = self._makeid()
return x
def _makeid(self):
return self.parent.nodeid + "::" + self.name
def __hash__(self):
return hash(self.nodeid)
def setup(self):
pass
def teardown(self):
pass
def listchain(self):
""" return list of all parent collectors up to self,
starting from root of collection tree. """
chain = []
item = self
while item is not None:
chain.append(item)
item = item.parent
chain.reverse()
return chain
def add_marker(self, marker):
""" dynamically add a marker object to the node.
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator, MARK_GEN
if isinstance(marker, six.string_types):
marker = getattr(MARK_GEN, marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
def get_marker(self, name):
""" get a marker object from this node or None if
the node doesn't have a marker with that name. """
val = self.keywords.get(name, None)
if val is not None:
from _pytest.mark import MarkInfo, MarkDecorator
if isinstance(val, (MarkDecorator, MarkInfo)):
return val
def listextrakeywords(self):
""" Return a set of all extra keywords in self and any parents."""
extra_keywords = set()
item = self
for item in self.listchain():
extra_keywords.update(item.extra_keyword_matches)
return extra_keywords
def listnames(self):
return [x.name for x in self.listchain()]
def addfinalizer(self, fin):
""" register a function to be called when this node is finalized.
This method can only be called when this node is active
in a setup chain, for example during self.setup().
"""
self.session._setupstate.addfinalizer(fin, self)
def getparent(self, cls):
""" get the next parent node (including ourself)
which is an instance of the given class"""
current = self
while current and not isinstance(current, cls):
current = current.parent
return current
def _prunetraceback(self, excinfo):
pass
def _repr_failure_py(self, excinfo, style=None):
fm = self.session._fixturemanager
if excinfo.errisinstance(fm.FixtureLookupError):
return excinfo.value.formatrepr()
tbfilter = True
if self.config.option.fulltrace:
style = "long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
# XXX should excinfo.getrepr record all data and toterminal() process it?
if style is None:
if self.config.option.tbstyle == "short":
style = "short"
else:
style = "long"
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
repr_failure = _repr_failure_py
class Collector(Node):
""" Collector instances create children through collect()
and thus iteratively build a tree.
"""
class CollectError(Exception):
""" an error during collection, contains a custom message. """
def collect(self):
""" returns a list of children (items and collectors)
for this collection node.
"""
raise NotImplementedError("abstract")
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
exc = excinfo.value
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
ntraceback = traceback.cut(path=self.fspath)
if ntraceback == traceback:
ntraceback = ntraceback.cut(excludepath=tracebackcutdir)
excinfo.traceback = ntraceback.filter()
class FSCollector(Collector):
def __init__(self, fspath, parent=None, config=None, session=None):
fspath = py.path.local(fspath) # xxx only for test_resultlog.py?
name = fspath.basename
if parent is not None:
rel = fspath.relto(parent.fspath)
if rel:
name = rel
name = name.replace(os.sep, nodes.SEP)
super(FSCollector, self).__init__(name, parent, config, session)
self.fspath = fspath
def _check_initialpaths_for_relpath(self):
for initialpath in self.session._initialpaths:
if self.fspath.common(initialpath) == initialpath:
return self.fspath.relto(initialpath.dirname)
def _makeid(self):
relpath = self.fspath.relto(self.config.rootdir)
if not relpath:
relpath = self._check_initialpaths_for_relpath()
if os.sep != nodes.SEP:
relpath = relpath.replace(os.sep, nodes.SEP)
return relpath
class File(FSCollector):
""" base class for collecting tests from a file. """
class Item(Node):
""" a basic test invocation item. Note that for a single function
there might be multiple test invocation items.
"""
nextitem = None
def __init__(self, name, parent=None, config=None, session=None):
super(Item, self).__init__(name, parent, config, session)
self._report_sections = []
def add_report_section(self, when, key, content):
"""
Adds a new report section, similar to what's done internally to add stdout and
stderr captured output::
item.add_report_section("call", "stdout", "report section contents")
:param str when:
One of the possible capture states, ``"setup"``, ``"call"``, ``"teardown"``.
:param str key:
Name of the section, can be customized at will. Pytest uses ``"stdout"`` and
``"stderr"`` internally.
:param str content:
The full contents as a string.
"""
if content:
self._report_sections.append((when, key, content))
def reportinfo(self):
return self.fspath, None, ""
@property
def location(self):
try:
return self._location
except AttributeError:
location = self.reportinfo()
# bestrelpath is a quite slow function
cache = self.config.__dict__.setdefault("_bestrelpathcache", {})
try:
fspath = cache[location[0]]
except KeyError:
fspath = self.session.fspath.bestrelpath(location[0])
cache[location[0]] = fspath
location = (fspath, location[1], str(location[2]))
self._location = location
return location
 class NoMatch(Exception):
     """ raised if matching cannot locate a matching names. """

@@ -623,13 +268,14 @@ class Failed(Exception):
     """ signals an stop as failed test run. """


-class Session(FSCollector):
+class Session(nodes.FSCollector):
     Interrupted = Interrupted
     Failed = Failed

     def __init__(self, config):
-        FSCollector.__init__(self, config.rootdir, parent=None,
-                             config=config, session=self)
+        nodes.FSCollector.__init__(
+            self, config.rootdir, parent=None,
+            config=config, session=self)
         self.testsfailed = 0
         self.testscollected = 0
         self.shouldstop = False
@@ -826,11 +472,11 @@ class Session(FSCollector):
             nextnames = names[1:]
             resultnodes = []
             for node in matching:
-                if isinstance(node, Item):
+                if isinstance(node, nodes.Item):
                     if not names:
                         resultnodes.append(node)
                     continue
-                assert isinstance(node, Collector)
+                assert isinstance(node, nodes.Collector)
                 rep = collect_one_node(node)
                 if rep.passed:
                     has_matched = False
@@ -852,11 +498,11 @@ class Session(FSCollector):

     def genitems(self, node):
         self.trace("genitems", node)
-        if isinstance(node, Item):
+        if isinstance(node, nodes.Item):
             node.ihook.pytest_itemcollected(item=node)
             yield node
         else:
-            assert isinstance(node, Collector)
+            assert isinstance(node, nodes.Collector)
             rep = collect_one_node(node)
             if rep.passed:
                 for subnode in rep.result:
@@ -2,14 +2,19 @@
 from __future__ import absolute_import, division, print_function

 import inspect
+import keyword
 import warnings

 import attr

 from collections import namedtuple
 from operator import attrgetter
 from six.moves import map

+from _pytest.config import UsageError
 from .deprecated import MARK_PARAMETERSET_UNPACKING
 from .compat import NOTSET, getfslineno

+EMPTY_PARAMETERSET_OPTION = "empty_parameter_set_mark"
+

 def alias(name, warning=None):
     getter = attrgetter(name)
@@ -82,10 +87,7 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
             del argvalues

         if not parameters:
-            fs, lineno = getfslineno(function)
-            reason = "got empty parameter set %r, function %s at %s:%d" % (
-                argnames, function.__name__, fs, lineno)
-            mark = MARK_GEN.skip(reason=reason)
+            mark = get_empty_parameterset_mark(config, argnames, function)
             parameters.append(ParameterSet(
                 values=(NOTSET,) * len(argnames),
                 marks=[mark],
@@ -94,6 +96,20 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
         return argnames, parameters
def get_empty_parameterset_mark(config, argnames, function):
requested_mark = config.getini(EMPTY_PARAMETERSET_OPTION)
if requested_mark in ('', None, 'skip'):
mark = MARK_GEN.skip
elif requested_mark == 'xfail':
mark = MARK_GEN.xfail(run=False)
else:
raise LookupError(requested_mark)
fs, lineno = getfslineno(function)
reason = "got empty parameter set %r, function %s at %s:%d" % (
argnames, function.__name__, fs, lineno)
return mark(reason=reason)
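The dispatch in `get_empty_parameterset_mark` reduces to a three-way choice on the ini value; the sketch below mirrors it with plain strings in place of `MARK_GEN` marks (hypothetical helper, not part of pytest):

```python
def choose_empty_parameterset_action(requested_mark):
    """Mirror of get_empty_parameterset_mark's dispatch, using strings instead of marks."""
    if requested_mark in ('', None, 'skip'):
        # unset or 'skip' in the ini file -> skip the empty parameter set
        return 'skip'
    elif requested_mark == 'xfail':
        # 'xfail' -> mark it xfail without running it
        return 'xfail(run=False)'
    else:
        # anything else is rejected (pytest_configure also validates this)
        raise LookupError(requested_mark)
```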
 class MarkerError(Exception):
     """Error in use of a pytest marker/attribute."""

@@ -133,6 +149,9 @@ def pytest_addoption(parser):
     )

     parser.addini("markers", "markers for test functions", 'linelist')
+    parser.addini(
+        EMPTY_PARAMETERSET_OPTION,
+        "default marker for empty parametersets")


 def pytest_cmdline_main(config):
@@ -222,6 +241,9 @@ class KeywordMapping(object):
         return False


+python_keywords_allowed_list = ["or", "and", "not"]
+
+
 def matchmark(colitem, markexpr):
     """Tries to match on any marker names, attached to the given colitem."""
     return eval(markexpr, {}, MarkMapping.from_keywords(colitem.keywords))
@@ -259,7 +281,13 @@ def matchkeyword(colitem, keywordexpr):
         return mapping[keywordexpr]
     elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
         return not mapping[keywordexpr[4:]]
-    return eval(keywordexpr, {}, mapping)
+    for kwd in keywordexpr.split():
+        if keyword.iskeyword(kwd) and kwd not in python_keywords_allowed_list:
+            raise UsageError("Python keyword '{}' not accepted in expressions passed to '-k'".format(kwd))
+    try:
+        return eval(keywordexpr, {}, mapping)
+    except SyntaxError:
+        raise UsageError("Wrong expression passed to '-k': {}".format(keywordexpr))
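The `-k` guard above leans on the stdlib `keyword` module to reject Python keywords other than the boolean operators. A stand-alone sketch, with `ValueError` standing in for pytest's `UsageError`:

```python
import keyword

# only the boolean operators are meaningful in -k expressions
python_keywords_allowed_list = ["or", "and", "not"]


def validate_keyword_expression(keywordexpr):
    """Reject any Python keyword in the expression except and/or/not."""
    for kwd in keywordexpr.split():
        if keyword.iskeyword(kwd) and kwd not in python_keywords_allowed_list:
            raise ValueError(
                "Python keyword '{}' not accepted in expressions passed to '-k'".format(kwd))
```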
 def pytest_configure(config):
@@ -267,12 +295,19 @@ def pytest_configure(config):
     if config.option.strict:
         MARK_GEN._config = config

+    empty_parameterset = config.getini(EMPTY_PARAMETERSET_OPTION)
+
+    if empty_parameterset not in ('skip', 'xfail', None, ''):
+        raise UsageError(
+            "{!s} must be one of skip and xfail,"
+            " but it is {!r}".format(EMPTY_PARAMETERSET_OPTION, empty_parameterset))
+

 def pytest_unconfigure(config):
     MARK_GEN._config = getattr(config, '_old_mark_config', None)


-class MarkGenerator:
+class MarkGenerator(object):
     """ Factory for :class:`MarkDecorator` objects - exposed as
     a ``pytest.mark`` singleton instance. Example::
@@ -88,7 +88,7 @@ def derive_importpath(import_path, raising):
     return attr, target


-class Notset:
+class Notset(object):
     def __repr__(self):
         return "<notset>"

@@ -96,7 +96,7 @@ class Notset:
 notset = Notset()


-class MonkeyPatch:
+class MonkeyPatch(object):
     """ Object returned by the ``monkeypatch`` fixture keeping a record of setattr/item/env/syspath changes.
     """
@@ -1,5 +1,18 @@
+from __future__ import absolute_import, division, print_function
+from collections import MutableMapping as MappingMixin
+import os
+
+import six
+import py
+import attr
+
+import _pytest
+
 SEP = "/"

+tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
+

 def _splitnode(nodeid):
     """Split a nodeid into constituent 'parts'.
@@ -35,3 +48,353 @@ def ischildnode(baseid, nodeid):
     if len(node_parts) < len(base_parts):
         return False
     return node_parts[:len(base_parts)] == base_parts
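For context, `ischildnode` compares nodeid parts produced by `_splitnode`; the simplified sketch below assumes nodeids of the `path/to/file.py::Class::test` shape (illustrative names, not the pytest functions themselves):

```python
SEP = "/"


def splitnode_sketch(nodeid):
    """Split 'dir/test_mod.py::TestCls::test' into its path and '::' parts."""
    parts = nodeid.split(SEP)
    # the last path component may carry '::'-separated test parts
    parts[-1:] = parts[-1].split("::")
    return parts


def ischildnode_sketch(baseid, nodeid):
    """True if *nodeid*'s parts start with all of *baseid*'s parts."""
    base_parts = splitnode_sketch(baseid)
    node_parts = splitnode_sketch(nodeid)
    if len(node_parts) < len(base_parts):
        return False
    return node_parts[:len(base_parts)] == base_parts
```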
@attr.s
class _CompatProperty(object):
name = attr.ib()
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(__import__('pytest'), self.name)
class NodeKeywords(MappingMixin):
def __init__(self, node):
self.node = node
self.parent = node.parent
self._markers = {node.name: True}
def __getitem__(self, key):
try:
return self._markers[key]
except KeyError:
if self.parent is None:
raise
return self.parent.keywords[key]
def __setitem__(self, key, value):
self._markers[key] = value
def __delitem__(self, key):
raise ValueError("cannot delete key in keywords dict")
def __iter__(self):
seen = set(self._markers)
if self.parent is not None:
seen.update(self.parent.keywords)
return iter(seen)
def __len__(self):
return len(self.__iter__())
def keys(self):
return list(self)
def __repr__(self):
return "<NodeKeywords for node %s>" % (self.node, )
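`NodeKeywords` is a `MutableMapping` whose lookups fall back to the parent node's keywords, so a test item "sees" markers set on its module or class. A minimal self-contained version of that chaining (illustrative, not the pytest class):

```python
try:
    from collections.abc import MutableMapping  # Python 3
except ImportError:
    from collections import MutableMapping


class ChainedKeywords(MutableMapping):
    """Mapping whose missing keys are looked up in the parent mapping."""

    def __init__(self, name, parent=None):
        self.parent = parent
        self._markers = {name: True}

    def __getitem__(self, key):
        try:
            return self._markers[key]
        except KeyError:
            if self.parent is None:
                raise
            # fall back to the parent's keywords
            return self.parent[key]

    def __setitem__(self, key, value):
        self._markers[key] = value

    def __delitem__(self, key):
        raise ValueError("cannot delete key in keywords dict")

    def __iter__(self):
        seen = set(self._markers)
        if self.parent is not None:
            seen.update(self.parent)
        return iter(seen)

    def __len__(self):
        return len(set(self.__iter__()))
```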
class Node(object):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
def __init__(self, name, parent=None, config=None, session=None):
#: a unique name within the scope of the parent node
self.name = name
#: the parent collector node.
self.parent = parent
#: the pytest config object
self.config = config or parent.config
#: the session this node is part of
self.session = session or parent.session
#: filesystem path where this node was collected from (can be None)
self.fspath = getattr(parent, 'fspath', None)
#: keywords/markers collected from all scopes
self.keywords = NodeKeywords(self)
#: allow adding of extra keywords to use for matching
self.extra_keyword_matches = set()
# used for storing artificial fixturedefs for direct parametrization
self._name2pseudofixturedef = {}
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(__import__('pytest'), name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
return "<%s %r>" % (self.__class__.__name__,
getattr(self, 'name', None))
def warn(self, code, message):
""" generate a warning with the given code and message for this
item. """
assert isinstance(code, str)
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
# methods for ordering nodes
@property
def nodeid(self):
""" a ::-separated string denoting its collection tree address. """
try:
return self._nodeid
except AttributeError:
self._nodeid = x = self._makeid()
return x
def _makeid(self):
return self.parent.nodeid + "::" + self.name
def __hash__(self):
return hash(self.nodeid)
def setup(self):
pass
def teardown(self):
pass
def listchain(self):
""" return list of all parent collectors up to self,
starting from root of collection tree. """
chain = []
item = self
while item is not None:
chain.append(item)
item = item.parent
chain.reverse()
return chain
def add_marker(self, marker):
""" dynamically add a marker object to the node.
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator, MARK_GEN
if isinstance(marker, six.string_types):
marker = getattr(MARK_GEN, marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
def get_marker(self, name):
""" get a marker object from this node or None if
the node doesn't have a marker with that name. """
val = self.keywords.get(name, None)
if val is not None:
from _pytest.mark import MarkInfo, MarkDecorator
if isinstance(val, (MarkDecorator, MarkInfo)):
return val
def listextrakeywords(self):
""" Return a set of all extra keywords in self and any parents."""
extra_keywords = set()
for item in self.listchain():
extra_keywords.update(item.extra_keyword_matches)
return extra_keywords
def listnames(self):
return [x.name for x in self.listchain()]
def addfinalizer(self, fin):
""" register a function to be called when this node is finalized.
This method can only be called when this node is active
in a setup chain, for example during self.setup().
"""
self.session._setupstate.addfinalizer(fin, self)
def getparent(self, cls):
""" get the next parent node (including ourself)
which is an instance of the given class"""
current = self
while current and not isinstance(current, cls):
current = current.parent
return current
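``listchain`` and ``getparent`` are both plain parent-pointer walks; the traversal can be sketched with a stand-in node class (illustrative names, not pytest's API):

```python
class FakeNode(object):
    """Stand-in for a collection node with just name/parent."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent


def listchain(node):
    """Return node plus all its parents, root first."""
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    chain.reverse()
    return chain


def getparent(node, cls):
    """Closest node (including node itself) that is an instance of cls."""
    while node is not None and not isinstance(node, cls):
        node = node.parent
    return node


root = FakeNode("session")
mod = FakeNode("test_mod.py", parent=root)
func = FakeNode("test_one", parent=mod)
```

Because the chain is reversed before returning, callers see the root-to-leaf order used when composing node ids and reports.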
def _prunetraceback(self, excinfo):
pass
def _repr_failure_py(self, excinfo, style=None):
fm = self.session._fixturemanager
if excinfo.errisinstance(fm.FixtureLookupError):
return excinfo.value.formatrepr()
tbfilter = True
if self.config.option.fulltrace:
style = "long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
# XXX should excinfo.getrepr record all data and toterminal() process it?
if style is None:
if self.config.option.tbstyle == "short":
style = "short"
else:
style = "long"
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
repr_failure = _repr_failure_py
class Collector(Node):
""" Collector instances create children through collect()
and thus iteratively build a tree.
"""
class CollectError(Exception):
""" an error during collection, contains a custom message. """
def collect(self):
""" returns a list of children (items and collectors)
for this collection node.
"""
raise NotImplementedError("abstract")
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
exc = excinfo.value
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
ntraceback = traceback.cut(path=self.fspath)
if ntraceback == traceback:
ntraceback = ntraceback.cut(excludepath=tracebackcutdir)
excinfo.traceback = ntraceback.filter()
class FSCollector(Collector):
def __init__(self, fspath, parent=None, config=None, session=None):
fspath = py.path.local(fspath) # xxx only for test_resultlog.py?
name = fspath.basename
if parent is not None:
rel = fspath.relto(parent.fspath)
if rel:
name = rel
name = name.replace(os.sep, SEP)
super(FSCollector, self).__init__(name, parent, config, session)
self.fspath = fspath
def _check_initialpaths_for_relpath(self):
for initialpath in self.session._initialpaths:
if self.fspath.common(initialpath) == initialpath:
return self.fspath.relto(initialpath.dirname)
def _makeid(self):
relpath = self.fspath.relto(self.config.rootdir)
if not relpath:
relpath = self._check_initialpaths_for_relpath()
if os.sep != SEP:
relpath = relpath.replace(os.sep, SEP)
return relpath
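The id computed above is essentially the collected path relative to ``rootdir`` with the platform separator normalized to ``/``; a rough standard-library equivalent (a sketch, not pytest's actual implementation, which uses ``py.path``):

```python
import os
import os.path

SEP = "/"  # pytest node ids always use forward slashes


def make_fs_nodeid(fspath, rootdir):
    """Relative path from rootdir, with os.sep normalized to '/'."""
    relpath = os.path.relpath(fspath, rootdir)
    if os.sep != SEP:
        relpath = relpath.replace(os.sep, SEP)
    return relpath
```

On Windows this turns ``tests\test_x.py`` into ``tests/test_x.py``, keeping node ids stable across platforms.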
class File(FSCollector):
""" base class for collecting tests from a file. """
class Item(Node):
""" a basic test invocation item. Note that for a single function
there might be multiple test invocation items.
"""
nextitem = None
def __init__(self, name, parent=None, config=None, session=None):
super(Item, self).__init__(name, parent, config, session)
self._report_sections = []
def add_report_section(self, when, key, content):
"""
Adds a new report section, similar to what's done internally to add stdout and
stderr captured output::
item.add_report_section("call", "stdout", "report section contents")
:param str when:
One of the possible capture states, ``"setup"``, ``"call"``, ``"teardown"``.
:param str key:
Name of the section, can be customized at will. Pytest uses ``"stdout"`` and
``"stderr"`` internally.
:param str content:
The full contents as a string.
"""
if content:
self._report_sections.append((when, key, content))
def reportinfo(self):
return self.fspath, None, ""
@property
def location(self):
try:
return self._location
except AttributeError:
location = self.reportinfo()
# bestrelpath is a quite slow function
cache = self.config.__dict__.setdefault("_bestrelpathcache", {})
try:
fspath = cache[location[0]]
except KeyError:
fspath = self.session.fspath.bestrelpath(location[0])
cache[location[0]] = fspath
location = (fspath, location[1], str(location[2]))
self._location = location
return location
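The ``location`` property above memoizes on first access by stashing the computed tuple in an instance attribute, because ``bestrelpath`` is slow. The pattern in isolation (illustrative, with a counter to show the computation runs exactly once):

```python
class Located(object):
    def __init__(self):
        self.computations = 0

    @property
    def location(self):
        try:
            return self._location
        except AttributeError:
            # expensive work happens here, on the first access only
            self.computations += 1
            self._location = ("test_mod.py", 3, "test_one")
            return self._location


item = Located()
first = item.location
second = item.location
```

Subsequent accesses return the identical cached tuple; the try/except-on-AttributeError form avoids a sentinel value and an extra attribute check on the fast path.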


@ -171,7 +171,7 @@ def _pytest(request):
return PytestArg(request) return PytestArg(request)
class PytestArg: class PytestArg(object):
def __init__(self, request): def __init__(self, request):
self.request = request self.request = request
@ -186,7 +186,7 @@ def get_public_names(values):
return [x for x in values if x[0] != "_"] return [x for x in values if x[0] != "_"]
class ParsedCall: class ParsedCall(object):
def __init__(self, name, kwargs): def __init__(self, name, kwargs):
self.__dict__.update(kwargs) self.__dict__.update(kwargs)
self._name = name self._name = name
@ -197,7 +197,7 @@ class ParsedCall:
return "<ParsedCall %r(**%r)>" % (self._name, d) return "<ParsedCall %r(**%r)>" % (self._name, d)
class HookRecorder: class HookRecorder(object):
"""Record all hooks called in a plugin manager. """Record all hooks called in a plugin manager.
This wraps all the hook calls in the plugin manager, recording each call This wraps all the hook calls in the plugin manager, recording each call
@ -343,7 +343,7 @@ def testdir(request, tmpdir_factory):
rex_outcome = re.compile(r"(\d+) ([\w-]+)") rex_outcome = re.compile(r"(\d+) ([\w-]+)")
class RunResult: class RunResult(object):
"""The result of running a command. """The result of running a command.
Attributes: Attributes:
@ -397,7 +397,36 @@ class RunResult:
assert obtained == dict(passed=passed, skipped=skipped, failed=failed, error=error) assert obtained == dict(passed=passed, skipped=skipped, failed=failed, error=error)
class Testdir: class CwdSnapshot(object):
def __init__(self):
self.__saved = os.getcwd()
def restore(self):
os.chdir(self.__saved)
class SysModulesSnapshot(object):
def __init__(self, preserve=None):
self.__preserve = preserve
self.__saved = dict(sys.modules)
def restore(self):
if self.__preserve:
self.__saved.update(
(k, m) for k, m in sys.modules.items() if self.__preserve(k))
sys.modules.clear()
sys.modules.update(self.__saved)
class SysPathsSnapshot(object):
def __init__(self):
self.__saved = list(sys.path), list(sys.meta_path)
def restore(self):
sys.path[:], sys.meta_path[:] = self.__saved
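The snapshot classes introduced here all follow one pattern: copy mutable global state at construction, write it back on ``restore()``. ``SysModulesSnapshot`` adds a ``preserve`` predicate so stateful modules (the zope ones) survive the restore. A generic sketch of that pattern over a plain dict (``DictSnapshot`` is a hypothetical helper, not part of pytest):

```python
class DictSnapshot(object):
    """Snapshot a dict now; restore() puts the old contents back."""

    def __init__(self, target, preserve=None):
        self._target = target
        self._preserve = preserve
        self._saved = dict(target)

    def restore(self):
        if self._preserve:
            # keep entries added since the snapshot that match the predicate
            self._saved.update(
                (k, v) for k, v in self._target.items() if self._preserve(k))
        self._target.clear()
        self._target.update(self._saved)


state = {"zope.interface": 1, "mymod": 2}
snap = DictSnapshot(state, preserve=lambda name: name.startswith("zope"))
state["zope.event"] = 3   # preserved across restore
state["other"] = 4        # dropped by restore
del state["mymod"]        # brought back by restore
snap.restore()
```

Clearing and re-populating the original mapping in place (rather than rebinding it) matters for ``sys.modules`` and ``sys.path``, where other code holds references to the same object.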
class Testdir(object):
"""Temporary test directory with tools to test/run pytest itself. """Temporary test directory with tools to test/run pytest itself.
This is based on the ``tmpdir`` fixture but provides a number of methods This is based on the ``tmpdir`` fixture but provides a number of methods
@ -421,9 +450,10 @@ class Testdir:
name = request.function.__name__ name = request.function.__name__
self.tmpdir = tmpdir_factory.mktemp(name, numbered=True) self.tmpdir = tmpdir_factory.mktemp(name, numbered=True)
self.plugins = [] self.plugins = []
self._savesyspath = (list(sys.path), list(sys.meta_path)) self._cwd_snapshot = CwdSnapshot()
self._savemodulekeys = set(sys.modules) self._sys_path_snapshot = SysPathsSnapshot()
self.chdir() # always chdir self._sys_modules_snapshot = self.__take_sys_modules_snapshot()
self.chdir()
self.request.addfinalizer(self.finalize) self.request.addfinalizer(self.finalize)
method = self.request.config.getoption("--runpytest") method = self.request.config.getoption("--runpytest")
if method == "inprocess": if method == "inprocess":
@ -442,23 +472,17 @@ class Testdir:
it can be looked at after the test run has finished. it can be looked at after the test run has finished.
""" """
sys.path[:], sys.meta_path[:] = self._savesyspath self._sys_modules_snapshot.restore()
if hasattr(self, '_olddir'): self._sys_path_snapshot.restore()
self._olddir.chdir() self._cwd_snapshot.restore()
self.delete_loaded_modules()
def delete_loaded_modules(self): def __take_sys_modules_snapshot(self):
"""Delete modules that have been loaded during a test. # some zope modules used by twisted-related tests keep internal state
# and can't be deleted; we had some trouble in the past with
This allows the interpreter to catch module changes in case # `zope.interface` for example
the module is re-imported. def preserve_module(name):
""" return name.startswith("zope")
for name in set(sys.modules).difference(self._savemodulekeys): return SysModulesSnapshot(preserve=preserve_module)
# some zope modules used by twisted-related tests keeps internal
# state and can't be deleted; we had some trouble in the past
# with zope.interface for example
if not name.startswith("zope"):
del sys.modules[name]
def make_hook_recorder(self, pluginmanager): def make_hook_recorder(self, pluginmanager):
"""Create a new :py:class:`HookRecorder` for a PluginManager.""" """Create a new :py:class:`HookRecorder` for a PluginManager."""
@ -473,9 +497,7 @@ class Testdir:
This is done automatically upon instantiation. This is done automatically upon instantiation.
""" """
old = self.tmpdir.chdir() self.tmpdir.chdir()
if not hasattr(self, '_olddir'):
self._olddir = old
def _makefile(self, ext, args, kwargs, encoding='utf-8'): def _makefile(self, ext, args, kwargs, encoding='utf-8'):
items = list(kwargs.items()) items = list(kwargs.items())
@ -690,42 +712,58 @@ class Testdir:
:return: a :py:class:`HookRecorder` instance :return: a :py:class:`HookRecorder` instance
""" """
# When running py.test inline any plugins active in the main test finalizers = []
# process are already imported. So this disables the warning which try:
# will trigger to say they can no longer be rewritten, which is fine as # When running py.test inline any plugins active in the main test
# they have already been rewritten. # process are already imported. So this disables the warning which
orig_warn = AssertionRewritingHook._warn_already_imported # will trigger to say they can no longer be rewritten, which is
# fine as they have already been rewritten.
orig_warn = AssertionRewritingHook._warn_already_imported
def revert(): def revert_warn_already_imported():
AssertionRewritingHook._warn_already_imported = orig_warn AssertionRewritingHook._warn_already_imported = orig_warn
finalizers.append(revert_warn_already_imported)
AssertionRewritingHook._warn_already_imported = lambda *a: None
self.request.addfinalizer(revert) # Any sys.module or sys.path changes done while running py.test
AssertionRewritingHook._warn_already_imported = lambda *a: None # inline should be reverted after the test run completes to avoid
# clashing with later inline tests run within the same pytest test,
# e.g. just because they use matching test module names.
finalizers.append(self.__take_sys_modules_snapshot().restore)
finalizers.append(SysPathsSnapshot().restore)
rec = [] # Important note:
# - our tests should not leave any other references/registrations
# laying around other than possibly loaded test modules
# referenced from sys.modules, as nothing will clean those up
# automatically
class Collect: rec = []
def pytest_configure(x, config):
rec.append(self.make_hook_recorder(config.pluginmanager))
plugins = kwargs.get("plugins") or [] class Collect(object):
plugins.append(Collect()) def pytest_configure(x, config):
ret = pytest.main(list(args), plugins=plugins) rec.append(self.make_hook_recorder(config.pluginmanager))
self.delete_loaded_modules()
if len(rec) == 1:
reprec = rec.pop()
else:
class reprec:
pass
reprec.ret = ret
# typically we reraise keyboard interrupts from the child run because plugins = kwargs.get("plugins") or []
# it's our user requesting interruption of the testing plugins.append(Collect())
if ret == 2 and not kwargs.get("no_reraise_ctrlc"): ret = pytest.main(list(args), plugins=plugins)
calls = reprec.getcalls("pytest_keyboard_interrupt") if len(rec) == 1:
if calls and calls[-1].excinfo.type == KeyboardInterrupt: reprec = rec.pop()
raise KeyboardInterrupt() else:
return reprec class reprec(object):
pass
reprec.ret = ret
# typically we reraise keyboard interrupts from the child run
# because it's our user requesting interruption of the testing
if ret == 2 and not kwargs.get("no_reraise_ctrlc"):
calls = reprec.getcalls("pytest_keyboard_interrupt")
if calls and calls[-1].excinfo.type == KeyboardInterrupt:
raise KeyboardInterrupt()
return reprec
finally:
for finalizer in finalizers:
finalizer()
def runpytest_inprocess(self, *args, **kwargs): def runpytest_inprocess(self, *args, **kwargs):
"""Return result of running pytest in-process, providing a similar """Return result of running pytest in-process, providing a similar
@ -742,13 +780,13 @@ class Testdir:
reprec = self.inline_run(*args, **kwargs) reprec = self.inline_run(*args, **kwargs)
except SystemExit as e: except SystemExit as e:
class reprec: class reprec(object):
ret = e.args[0] ret = e.args[0]
except Exception: except Exception:
traceback.print_exc() traceback.print_exc()
class reprec: class reprec(object):
ret = 3 ret = 3
finally: finally:
out, err = capture.readouterr() out, err = capture.readouterr()
@ -1029,7 +1067,7 @@ def getdecoded(out):
py.io.saferepr(out),) py.io.saferepr(out),)
class LineComp: class LineComp(object):
def __init__(self): def __init__(self):
self.stringio = py.io.TextIO() self.stringio = py.io.TextIO()
@ -1047,7 +1085,7 @@ class LineComp:
return LineMatcher(lines1).fnmatch_lines(lines2) return LineMatcher(lines1).fnmatch_lines(lines2)
class LineMatcher: class LineMatcher(object):
"""Flexible matching of text. """Flexible matching of text.
This is a convenience class to test large texts like the output of This is a convenience class to test large texts like the output of


@ -19,7 +19,7 @@ from _pytest.config import hookimpl
import _pytest import _pytest
import pluggy import pluggy
from _pytest import fixtures from _pytest import fixtures
from _pytest import main from _pytest import nodes
from _pytest import deprecated from _pytest import deprecated
from _pytest.compat import ( from _pytest.compat import (
isclass, isfunction, is_generator, ascii_escaped, isclass, isfunction, is_generator, ascii_escaped,
@ -269,7 +269,7 @@ class PyobjMixin(PyobjContext):
return fspath, lineno, modpath return fspath, lineno, modpath
class PyCollector(PyobjMixin, main.Collector): class PyCollector(PyobjMixin, nodes.Collector):
def funcnamefilter(self, name): def funcnamefilter(self, name):
return self._matches_prefix_or_glob_option('python_functions', name) return self._matches_prefix_or_glob_option('python_functions', name)
@ -394,7 +394,7 @@ class PyCollector(PyobjMixin, main.Collector):
) )
class Module(main.File, PyCollector): class Module(nodes.File, PyCollector):
""" Collector for test classes and functions. """ """ Collector for test classes and functions. """
def _getobj(self): def _getobj(self):
@ -785,6 +785,7 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
from _pytest.fixtures import scope2index from _pytest.fixtures import scope2index
from _pytest.mark import ParameterSet from _pytest.mark import ParameterSet
from py.io import saferepr from py.io import saferepr
argnames, parameters = ParameterSet._for_parametrize( argnames, parameters = ParameterSet._for_parametrize(
argnames, argvalues, self.function, self.config) argnames, argvalues, self.function, self.config)
del argvalues del argvalues
@ -940,7 +941,7 @@ def _idval(val, argname, idx, idfn, config=None):
return ascii_escaped(val.pattern) return ascii_escaped(val.pattern)
elif enum is not None and isinstance(val, enum.Enum): elif enum is not None and isinstance(val, enum.Enum):
return str(val) return str(val)
elif isclass(val) and hasattr(val, '__name__'): elif (isclass(val) or isfunction(val)) and hasattr(val, '__name__'):
return val.__name__ return val.__name__
return str(argname) + str(idx) return str(argname) + str(idx)
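The change above extends id generation so that function values, like classes, contribute their ``__name__``; the fallback logic can be sketched as follows (a simplified stand-in, not pytest's full ``_idval``):

```python
import inspect


def idval_sketch(val, argname, idx):
    """Simplified parametrize id for a single value."""
    if isinstance(val, str):
        return val
    if (inspect.isclass(val) or inspect.isfunction(val)) and hasattr(val, "__name__"):
        return val.__name__
    # fall back to argname plus positional index, e.g. "x2"
    return str(argname) + str(idx)


def helper():
    pass


class Widget(object):
    pass
```

With this change, ``@pytest.mark.parametrize("f", [helper])`` produces the readable id ``helper`` instead of ``f0``.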
@ -1097,7 +1098,7 @@ def write_docstring(tw, doc):
tw.write(INDENT + line + "\n") tw.write(INDENT + line + "\n")
class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr): class Function(FunctionMixin, nodes.Item, fixtures.FuncargnamesCompatAttr):
""" a Function Item is responsible for setting up and executing a """ a Function Item is responsible for setting up and executing a
Python test function. Python test function.
""" """


@ -60,6 +60,9 @@ def pytest_runtest_protocol(item, nextitem):
nodeid=item.nodeid, location=item.location, nodeid=item.nodeid, location=item.location,
) )
runtestprotocol(item, nextitem=nextitem) runtestprotocol(item, nextitem=nextitem)
item.ihook.pytest_runtest_logfinish(
nodeid=item.nodeid, location=item.location,
)
return True return True
@ -175,7 +178,7 @@ def call_runtest_hook(item, when, **kwds):
return CallInfo(lambda: ihook(item=item, **kwds), when=when) return CallInfo(lambda: ihook(item=item, **kwds), when=when)
class CallInfo: class CallInfo(object):
""" Result/Exception info a function invocation. """ """ Result/Exception info a function invocation. """
#: None or ExceptionInfo object. #: None or ExceptionInfo object.
excinfo = None excinfo = None


@ -94,7 +94,7 @@ def pytest_report_teststatus(report):
return report.outcome, letter, report.outcome.upper() return report.outcome, letter, report.outcome.upper()
class WarningReport: class WarningReport(object):
""" """
Simple structure to hold warnings information captured by ``pytest_logwarning``. Simple structure to hold warnings information captured by ``pytest_logwarning``.
""" """
@ -129,7 +129,7 @@ class WarningReport:
return None return None
class TerminalReporter: class TerminalReporter(object):
def __init__(self, config, file=None): def __init__(self, config, file=None):
import _pytest.config import _pytest.config
self.config = config self.config = config
@ -152,8 +152,18 @@ class TerminalReporter:
self.reportchars = getreportopt(config) self.reportchars = getreportopt(config)
self.hasmarkup = self._tw.hasmarkup self.hasmarkup = self._tw.hasmarkup
self.isatty = file.isatty() self.isatty = file.isatty()
self._progress_items_reported = 0 self._progress_nodeids_reported = set()
self._show_progress_info = self.config.getini('console_output_style') == 'progress' self._show_progress_info = self._determine_show_progress_info()
def _determine_show_progress_info(self):
"""Return True if we should display progress information based on the current config"""
# do not show progress if we are not capturing output (#3038)
if self.config.getoption('capture') == 'no':
return False
# do not show progress if we are showing fixture setup/teardown
if self.config.getoption('setupshow'):
return False
return self.config.getini('console_output_style') == 'progress'
def hasopt(self, char): def hasopt(self, char):
char = {'xfailed': 'x', 'skipped': 's'}.get(char, char) char = {'xfailed': 'x', 'skipped': 's'}.get(char, char)
@ -178,7 +188,6 @@ class TerminalReporter:
if extra: if extra:
self._tw.write(extra, **kwargs) self._tw.write(extra, **kwargs)
self.currentfspath = -2 self.currentfspath = -2
self._write_progress_information_filling_space()
def ensure_newline(self): def ensure_newline(self):
if self.currentfspath: if self.currentfspath:
@ -268,14 +277,13 @@ class TerminalReporter:
# probably passed setup/teardown # probably passed setup/teardown
return return
running_xdist = hasattr(rep, 'node') running_xdist = hasattr(rep, 'node')
self._progress_items_reported += 1
if self.verbosity <= 0: if self.verbosity <= 0:
if not running_xdist and self.showfspath: if not running_xdist and self.showfspath:
self.write_fspath_result(rep.nodeid, letter) self.write_fspath_result(rep.nodeid, letter)
else: else:
self._tw.write(letter) self._tw.write(letter)
self._write_progress_if_past_edge()
else: else:
self._progress_nodeids_reported.add(rep.nodeid)
if markup is None: if markup is None:
if rep.passed: if rep.passed:
markup = {'green': True} markup = {'green': True}
@ -288,6 +296,8 @@ class TerminalReporter:
line = self._locationline(rep.nodeid, *rep.location) line = self._locationline(rep.nodeid, *rep.location)
if not running_xdist: if not running_xdist:
self.write_ensure_prefix(line, word, **markup) self.write_ensure_prefix(line, word, **markup)
if self._show_progress_info:
self._write_progress_information_filling_space()
else: else:
self.ensure_newline() self.ensure_newline()
self._tw.write("[%s]" % rep.node.gateway.id) self._tw.write("[%s]" % rep.node.gateway.id)
@ -299,31 +309,28 @@ class TerminalReporter:
self._tw.write(" " + line) self._tw.write(" " + line)
self.currentfspath = -2 self.currentfspath = -2
def _write_progress_if_past_edge(self): def pytest_runtest_logfinish(self, nodeid):
if not self._show_progress_info: if self.verbosity <= 0 and self._show_progress_info:
return self._progress_nodeids_reported.add(nodeid)
last_item = self._progress_items_reported == self._session.testscollected last_item = len(self._progress_nodeids_reported) == self._session.testscollected
if last_item: if last_item:
self._write_progress_information_filling_space() self._write_progress_information_filling_space()
return else:
past_edge = self._tw.chars_on_current_line + self._PROGRESS_LENGTH + 1 >= self._screen_width
past_edge = self._tw.chars_on_current_line + self._PROGRESS_LENGTH + 1 >= self._screen_width if past_edge:
if past_edge: msg = self._get_progress_information_message()
msg = self._get_progress_information_message() self._tw.write(msg + '\n', cyan=True)
self._tw.write(msg + '\n', cyan=True)
_PROGRESS_LENGTH = len(' [100%]') _PROGRESS_LENGTH = len(' [100%]')
def _get_progress_information_message(self): def _get_progress_information_message(self):
collected = self._session.testscollected collected = self._session.testscollected
if collected: if collected:
progress = self._progress_items_reported * 100 // collected progress = len(self._progress_nodeids_reported) * 100 // collected
return ' [{:3d}%]'.format(progress) return ' [{:3d}%]'.format(progress)
return ' [100%]' return ' [100%]'
def _write_progress_information_filling_space(self): def _write_progress_information_filling_space(self):
if not self._show_progress_info:
return
msg = self._get_progress_information_message() msg = self._get_progress_information_message()
fill = ' ' * (self._tw.fullwidth - self._tw.chars_on_current_line - len(msg) - 1) fill = ' ' * (self._tw.fullwidth - self._tw.chars_on_current_line - len(msg) - 1)
self.write(fill + msg, cyan=True) self.write(fill + msg, cyan=True)
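The percentage shown at the right edge is integer arithmetic over the count of reported node ids; the message formatting mirrors the code above (simplified sketch):

```python
def progress_message(reported, collected):
    """' [ NN%]' suffix written at the terminal's right edge."""
    if collected:
        progress = reported * 100 // collected
        return ' [{:3d}%]'.format(progress)
    return ' [100%]'
```

The ``{:3d}`` field keeps the suffix a constant seven characters wide, so the right edge does not jitter as the percentage grows from single to triple digits.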


@ -8,7 +8,7 @@ import py
from _pytest.monkeypatch import MonkeyPatch from _pytest.monkeypatch import MonkeyPatch
class TempdirFactory: class TempdirFactory(object):
"""Factory for temporary directories under the common base temp directory. """Factory for temporary directories under the common base temp directory.
The base directory can be configured using the ``--basetemp`` option. The base directory can be configured using the ``--basetemp`` option.


@ -1 +0,0 @@
Fixed hanging pexpect test on MacOS by using flush() instead of wait().


@ -1 +0,0 @@
Document hooks (defined with ``historic=True``) which cannot be used with ``hookwrapper=True``.


@ -1 +0,0 @@
Clarify that warning capturing doesn't change the warning filter by default.


@ -1 +0,0 @@
Clarify a possible confusion when using pytest_fixture_setup with fixture functions that return None.


@ -1 +0,0 @@
Replace py.std with stdlib imports.


@ -1 +0,0 @@
Fix skipping plugin reporting hook when test aborted before plugin setup hook.


@ -1 +0,0 @@
Fix the wording of a sentence on doctest flags use in pytest.


@ -1 +0,0 @@
Prefer ``https://*.readthedocs.io`` over ``http://*.rtfd.org`` for links in the documentation.


@ -1 +0,0 @@
Corrected 'you' to 'your' in logging docs.


@ -1 +0,0 @@
Improve readability (wording, grammar) of Getting Started guide


@ -1 +0,0 @@
Added note that calling pytest.main multiple times from the same process is not recommended because of import caching.


@ -6,6 +6,7 @@ Release announcements
:maxdepth: 2 :maxdepth: 2
release-3.4.0
release-3.3.2 release-3.3.2
release-3.3.1 release-3.3.1
release-3.3.0 release-3.3.0


@ -0,0 +1,52 @@
pytest-3.4.0
=======================================
The pytest team is proud to announce the 3.4.0 release!
pytest is a mature Python testing tool with more than 1600 tests
against itself, passing on many different interpreters and platforms.
This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:
http://doc.pytest.org/en/latest/changelog.html
For complete documentation, please visit:
http://docs.pytest.org
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
* Aaron
* Alan Velasco
* Anders Hovmöller
* Andrew Toolan
* Anthony Sottile
* Aron Coyle
* Brian Maissy
* Bruno Oliveira
* Cyrus Maden
* Florian Bruhin
* Henk-Jaap Wagenaar
* Ian Lesperance
* Jon Dufresne
* Jurko Gospodnetić
* Kate
* Kimberly
* Per A. Brodtkorb
* Pierre-Alexandre Fonta
* Raphael Castaneda
* Ronny Pfannschmidt
* ST John
* Segev Finer
* Thomas Hisch
* Tzu-ping Chung
* feuillemorte
Happy testing,
The Pytest Development Team


@ -116,6 +116,10 @@ You can ask for available builtin or project-custom
Add extra xml properties to the tag for the calling test. Add extra xml properties to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded. xml-encoded.
record_xml_attribute
Add extra xml attributes to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
caplog caplog
Access and control log capturing. Access and control log capturing.


@ -225,7 +225,7 @@ You can always peek at the content of the cache using the
=========================== test session starts ============================ =========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
rootdir: $REGENDOC_TMPDIR, inifile: rootdir: $REGENDOC_TMPDIR, inifile:
cachedir: $REGENDOC_TMPDIR/.cache cachedir: $REGENDOC_TMPDIR/.pytest_cache
------------------------------- cache values ------------------------------- ------------------------------- cache values -------------------------------
cache/lastfailed contains: cache/lastfailed contains:
{'test_caching.py::test_function': True} {'test_caching.py::test_function': True}


@ -152,11 +152,25 @@ above will show verbose output because ``-v`` overwrites ``-q``.
Builtin configuration file options Builtin configuration file options
---------------------------------------------- ----------------------------------------------
Here is a list of builtin configuration options that may be written in a ``pytest.ini``, ``tox.ini`` or ``setup.cfg``
file, usually located at the root of your repository. All options must be under a ``[pytest]`` section
(``[tool:pytest]`` for ``setup.cfg`` files).
Configuration file options may be overwritten in the command-line by using ``-o/--override``, which can also be
passed multiple times. The expected format is ``name=value``. For example::
pytest -o console_output_style=classic -o cache_dir=/tmp/mycache
.. confval:: minversion .. confval:: minversion
Specifies a minimal pytest version required for running tests. Specifies a minimal pytest version required for running tests.
minversion = 2.1 # will fail if we run with pytest-2.0 .. code-block:: ini
# content of pytest.ini
[pytest]
minversion = 3.0 # will fail if we run with pytest-2.8
.. confval:: addopts .. confval:: addopts
@ -165,6 +179,7 @@ Builtin configuration file options
.. code-block:: ini .. code-block:: ini
# content of pytest.ini
[pytest] [pytest]
addopts = --maxfail=2 -rf # exit after 2 failures, report fail info addopts = --maxfail=2 -rf # exit after 2 failures, report fail info
@ -331,3 +346,28 @@ Builtin configuration file options
# content of pytest.ini # content of pytest.ini
[pytest] [pytest]
console_output_style = classic console_output_style = classic
.. confval:: empty_parameter_set_mark
.. versionadded:: 3.4
Allows picking the action to take on empty parameter sets during parametrization:

* ``skip`` skips tests with an empty parameter set (default)
* ``xfail`` marks tests with an empty parameter set as ``xfail(run=False)``
.. code-block:: ini
# content of pytest.ini
[pytest]
empty_parameter_set_mark = xfail
.. note::
The default value of this option is planned to change to ``xfail`` in future releases
as this is considered less error-prone, see `#3155`_ for more details.
.. _`#3155`: https://github.com/pytest-dev/pytest/issues/3155


@ -157,6 +157,8 @@ class TestRaises(object):
# thanks to Matthew Scott for this test # thanks to Matthew Scott for this test
def test_dynamic_compile_shows_nicely(): def test_dynamic_compile_shows_nicely():
import imp
import sys
src = 'def foo():\n assert 1 == 0\n' src = 'def foo():\n assert 1 == 0\n'
name = 'abc-123' name = 'abc-123'
module = imp.new_module(name) module = imp.new_module(name)
@ -32,7 +32,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::
$ pytest -v -m webtest
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -46,7 +46,7 @@ Or the inverse, running all tests except the webtest ones::
$ pytest -v -m "not webtest"
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -67,7 +67,7 @@ tests based on their module, class, method, or function name::
$ pytest -v test_server.py::TestClass::test_method
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
@ -80,7 +80,7 @@ You can also select on the class::
$ pytest -v test_server.py::TestClass
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 1 item
@ -93,7 +93,7 @@ Or select multiple nodes::
$ pytest -v test_server.py::TestClass test_server.py::test_send_http
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
@ -131,7 +131,7 @@ select tests based on their names::
$ pytest -v -k http # running with the above defined example module
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -145,7 +145,7 @@ And you can also run all tests except the ones that match the keyword::
$ pytest -k "not send_http" -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -161,7 +161,7 @@ Or to select "http" and "quick" tests::
$ pytest -k "http or quick" -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 4 items
@ -432,7 +432,7 @@ The output is as follows::
$ pytest -q -s
Marker info name=my_marker args=(<function hello_world at 0xdeadbeef>,) kwars={}
.
1 passed in 0.12 seconds
We can see that the custom marker has its argument set extended with the function ``hello_world``. This is the key difference between creating a custom marker as a callable, which invokes ``__call__`` behind the scenes, and using ``with_args``.
@ -477,7 +477,7 @@ Let's run this without capturing output and see what we get::
glob args=('function',) kwargs={'x': 3}
glob args=('class',) kwargs={'x': 2}
glob args=('module',) kwargs={'x': 1}
.
1 passed in 0.12 seconds
marking platform specific tests with pytest
@ -60,7 +60,7 @@ consulted when reporting in ``verbose`` mode::
nonpython $ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR/nonpython, inifile:
collecting ... collected 2 items
@ -411,6 +411,8 @@ get on the terminal - we are working on that)::
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
import imp
import sys
src = 'def foo():\n assert 1 == 0\n'
name = 'abc-123'
module = imp.new_module(name)
@ -419,14 +421,14 @@ get on the terminal - we are working on that)::
sys.modules[name] = module
> module.foo()
failure_demo.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
E AssertionError
<2-codegen 'abc-123' $REGENDOC_TMPDIR/assertion/failure_demo.py:165>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -438,7 +440,7 @@ get on the terminal - we are working on that)::
return 43
> somefunc(f(), g())
failure_demo.py:178:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:9: in somefunc
otherfunc(x,y)
@ -460,7 +462,7 @@ get on the terminal - we are working on that)::
> a,b = l
E ValueError: not enough values to unpack (expected 2, got 0)
failure_demo.py:182: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -470,7 +472,7 @@ get on the terminal - we are working on that)::
> a,b = l
E TypeError: 'int' object is not iterable
failure_demo.py:186: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -483,7 +485,7 @@ get on the terminal - we are working on that)::
E + where False = <built-in method startswith of str object at 0xdeadbeef>('456')
E + where <built-in method startswith of str object at 0xdeadbeef> = '123'.startswith
failure_demo.py:191: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -500,7 +502,7 @@ get on the terminal - we are working on that)::
E + where '123' = <function TestMoreErrors.test_startswith_nested.<locals>.f at 0xdeadbeef>()
E + and '456' = <function TestMoreErrors.test_startswith_nested.<locals>.g at 0xdeadbeef>()
failure_demo.py:198: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -511,7 +513,7 @@ get on the terminal - we are working on that)::
E + where False = isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:201: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -522,7 +524,7 @@ get on the terminal - we are working on that)::
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors object at 0xdeadbeef>.x
failure_demo.py:205: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -532,7 +534,7 @@ get on the terminal - we are working on that)::
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:208: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors object at 0xdeadbeef>
@ -543,7 +545,7 @@ get on the terminal - we are working on that)::
> assert x == 0
E assert 1 == 0
failure_demo.py:213: AssertionError
___________________ TestCustomAssertMsg.test_single_line ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -557,7 +559,7 @@ get on the terminal - we are working on that)::
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_single_line.<locals>.A'>.a
failure_demo.py:224: AssertionError
____________________ TestCustomAssertMsg.test_multiline ____________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -574,7 +576,7 @@ get on the terminal - we are working on that)::
E assert 1 == 2
E + where 1 = <class 'failure_demo.TestCustomAssertMsg.test_multiline.<locals>.A'>.a
failure_demo.py:230: AssertionError
___________________ TestCustomAssertMsg.test_custom_repr ___________________
self = <failure_demo.TestCustomAssertMsg object at 0xdeadbeef>
@ -594,7 +596,7 @@ get on the terminal - we are working on that)::
E assert 1 == 2
E + where 1 = This is JSON\n{\n 'foo': 'bar'\n}.a
failure_demo.py:240: AssertionError
============================= warnings summary =============================
None
Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.
@ -332,7 +332,7 @@ which will add info only when run with "--v"::
$ pytest -v
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
info1: did you know that ...
did you?
rootdir: $REGENDOC_TMPDIR, inifile:
@ -385,9 +385,9 @@ Now we can profile which test functions execute the slowest::
test_some_are_slow.py ...
========================= slowest 3 test durations =========================
0.58s call test_some_are_slow.py::test_funcslow2
0.41s call test_some_are_slow.py::test_funcslow1
0.10s call test_some_are_slow.py::test_funcfast
========================= 3 passed in 0.12 seconds =========================
incremental testing - test steps
@ -537,7 +537,7 @@ We can run this::
file $REGENDOC_TMPDIR/b/test_error.py, line 1
def test_root(db): # no db here, will error out
E fixture 'db' not found
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
> use 'pytest --fixtures [testpath]' for help on them.
$REGENDOC_TMPDIR/b/test_error.py:1
@ -731,7 +731,7 @@ and run it::
test_module.py Esetting up a test failed! test_module.py::test_setup_fails
Fexecuting test failed test_module.py::test_call_fails
F
================================== ERRORS ==================================
____________________ ERROR at setup of test_setup_fails ____________________
@ -68,5 +68,5 @@ If you run this without output capturing::
.test_method1 called
.test other
.test_unit1 method called
.
4 passed in 0.12 seconds
@ -286,7 +286,7 @@ tests.
Let's execute it::
$ pytest -s -q --tb=no
FFteardown smtp
2 failed in 0.12 seconds
@ -391,7 +391,7 @@ We use the ``request.module`` attribute to optionally obtain an
again, nothing much has changed::
$ pytest -s -q --tb=no
FFfinalizing <smtplib.SMTP object at 0xdeadbeef> (smtp.gmail.com)
2 failed in 0.12 seconds
@ -612,7 +612,7 @@ Here we declare an ``app`` fixture which receives the previously defined
$ pytest -v test_appsetup.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 2 items
@ -681,40 +681,40 @@ Let's run the tests in verbose mode and with looking at the print-output::
$ pytest -v -s test_module.py
=========================== test session starts ============================
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
cachedir: .pytest_cache
rootdir: $REGENDOC_TMPDIR, inifile:
collecting ... collected 8 items
test_module.py::test_0[1] SETUP otherarg 1
RUN test0 with otherarg 1
PASSED TEARDOWN otherarg 1
test_module.py::test_0[2] SETUP otherarg 2
RUN test0 with otherarg 2
PASSED TEARDOWN otherarg 2
test_module.py::test_1[mod1] SETUP modarg mod1
RUN test1 with modarg mod1
PASSED
test_module.py::test_2[1-mod1] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod1
PASSED TEARDOWN otherarg 1
test_module.py::test_2[2-mod1] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod1
PASSED TEARDOWN otherarg 2
test_module.py::test_1[mod2] TEARDOWN modarg mod1
SETUP modarg mod2
RUN test1 with modarg mod2
PASSED
test_module.py::test_2[1-mod2] SETUP otherarg 1
RUN test2 with otherarg 1 and modarg mod2
PASSED TEARDOWN otherarg 1
test_module.py::test_2[2-mod2] SETUP otherarg 2
RUN test2 with otherarg 2 and modarg mod2
PASSED TEARDOWN otherarg 2
TEARDOWN modarg mod2
@ -3,24 +3,11 @@
Logging
-------
.. versionadded:: 3.3
.. versionchanged:: 3.4
pytest captures log messages of level ``WARNING`` or above automatically and displays them in their own section
for each failed test in the same manner as captured stdout and stderr.
Running without options::
@ -29,7 +16,7 @@ Running without options::
Shows failed tests like so::
----------------------- Captured stdlog call ----------------------
test_reporting.py 26 WARNING text going to logger
----------------------- Captured stdout call ----------------------
text going to stdout
----------------------- Captured stderr call ----------------------
@ -37,11 +24,10 @@ Shows failed tests like so::
==================== 2 failed in 0.02 seconds =====================
By default each captured log message shows the module, line number, log level
and message.
If desired the log and date format can be specified to
anything that the logging module supports by passing specific formatting options::
pytest --log-format="%(asctime)s %(levelname)s %(message)s" \
--log-date-format="%Y-%m-%d %H:%M:%S"
@ -49,14 +35,14 @@ Running pytest specifying formatting options::
Shows failed tests like so::
----------------------- Captured stdlog call ----------------------
2010-04-10 14:48:44 WARNING text going to logger
----------------------- Captured stdout call ----------------------
text going to stdout
----------------------- Captured stderr call ----------------------
text going to stderr
==================== 2 failed in 0.02 seconds =====================
These options can also be customized through the ``pytest.ini`` file:
.. code-block:: ini
@ -69,7 +55,7 @@ with::
pytest --no-print-logs
Or in the ``pytest.ini`` file:
.. code-block:: ini
@ -85,6 +71,10 @@ Shows failed tests in the normal manner as no logs were captured::
text going to stderr
==================== 2 failed in 0.02 seconds =====================
caplog fixture
^^^^^^^^^^^^^^
Inside tests it is possible to change the log level for the captured log
messages. This is supported by the ``caplog`` fixture::
@ -92,7 +82,7 @@ messages. This is supported by the ``caplog`` fixture::
caplog.set_level(logging.INFO)
pass
By default the level is set on the root logger,
however as a convenience it is also possible to set the log level of any
logger::
@ -100,14 +90,16 @@ logger::
caplog.set_level(logging.CRITICAL, logger='root.baz')
pass
The log levels set are restored automatically at the end of the test.
It is also possible to use a context manager to temporarily change the log
level inside a ``with`` block::
def test_bar(caplog):
with caplog.at_level(logging.INFO):
pass
Again, by default the level of the root logger is affected but the level of any
logger can be changed instead with::
def test_bar(caplog):
@ -115,7 +107,7 @@ logger can be changed instead with::
pass
Lastly all the logs sent to the logger during the test run are made available on
the fixture in the form of both the ``logging.LogRecord`` instances and the final log text.
This is useful for when you want to assert on the contents of a message::
def test_baz(caplog):
@ -146,12 +138,41 @@ You can call ``caplog.clear()`` to reset the captured log records in a test::
your_test_method()
assert ['Foo'] == [rec.message for rec in caplog.records]
The ``caplog.records`` attribute contains records from the current stage only, so
inside the ``setup`` phase it contains only setup logs, same with the ``call`` and
``teardown`` phases.
To access logs from other stages, use the ``caplog.get_records(when)`` method. As an example,
if you want to make sure that tests which use a certain fixture never log any warnings, you can inspect
the records for the ``setup`` and ``call`` stages during teardown like so:
.. code-block:: python
@pytest.fixture
def window(caplog):
window = create_window()
yield window
for when in ('setup', 'call'):
messages = [x.message for x in caplog.get_records(when) if x.levelno == logging.WARNING]
if messages:
pytest.fail('warning messages encountered during testing: {}'.format(messages))
caplog fixture API
~~~~~~~~~~~~~~~~~~
.. autoclass:: _pytest.logging.LogCaptureFixture
:members:
.. _live_logs:
Live Logs
^^^^^^^^^
By setting the :confval:`log_cli` configuration option to ``true``, pytest will output
logging records as they are emitted directly into the console.
You can specify the logging level for which log records with equal or higher
level are printed to the console by passing ``--log-cli-level``. This setting
@ -190,3 +211,49 @@ option names are:
* ``log_file_level``
* ``log_file_format``
* ``log_file_date_format``
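For reference, a minimal ini sketch that enables live console logging alongside a log file (the ``logs/pytest.log`` path is illustrative, not a default):

```ini
[pytest]
; emit log records to the console as they happen
log_cli = true
log_cli_level = INFO
; additionally write all records at DEBUG or above to a file
log_file = logs/pytest.log
log_file_level = DEBUG
```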
.. _log_release_notes:
Release notes
^^^^^^^^^^^^^
This feature was introduced as a drop-in replacement for the `pytest-catchlog
<https://pypi.org/project/pytest-catchlog/>`_ plugin and they conflict
with each other. The backward compatibility API with ``pytest-capturelog``
has been dropped when this feature was introduced, so if for that reason you
still need ``pytest-catchlog`` you can disable the internal feature by
adding to your ``pytest.ini``:
.. code-block:: ini
[pytest]
addopts=-p no:logging
.. _log_changes_3_4:
Incompatible changes in pytest 3.4
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
This feature was introduced in ``3.3`` and some **incompatible changes** have been
made in ``3.4`` after community feedback:
* Log levels are no longer changed unless explicitly requested by the :confval:`log_level` configuration
or ``--log-level`` command-line options. This allows users to configure logger objects themselves.
* :ref:`Live Logs <live_logs>` is now disabled by default and can be enabled setting the
:confval:`log_cli` configuration option to ``true``. When enabled, the verbosity is increased so logging for each
test is visible.
* :ref:`Live Logs <live_logs>` are now sent to ``sys.stdout`` and no longer require the ``-s`` command-line option
to work.
If you want to partially restore the logging behavior of version ``3.3``, you can add these options to your ``ini``
file:
.. code-block:: ini
[pytest]
log_cli=true
log_level=NOTSET
More details about the discussion that led to these changes can be found in
issue `#3013 <https://github.com/pytest-dev/pytest/issues/3013>`_.
@@ -256,6 +256,66 @@ This will add an extra property ``example_key="1"`` to the generated
Also please note that using this feature will break any schema verification.
This might be a problem when used with some CI servers.
record_xml_attribute
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. versionadded:: 3.4
To add an additional XML attribute to a testcase element, you can use the
``record_xml_attribute`` fixture. This can also be used to override existing values:
.. code-block:: python
def test_function(record_xml_attribute):
record_xml_attribute("assertions", "REQ-1234")
record_xml_attribute("classname", "custom_classname")
print('hello world')
assert True
Unlike ``record_xml_property``, this will not add a new child element.
Instead, this will add an attribute ``assertions="REQ-1234"`` inside the generated
``testcase`` tag and override the default ``classname`` with ``"classname=custom_classname"``:
.. code-block:: xml
<testcase classname="custom_classname" file="test_function.py" line="0" name="test_function" time="0.003" assertions="REQ-1234">
<system-out>
hello world
</system-out>
</testcase>
.. warning::
``record_xml_attribute`` is an experimental feature, and its interface might be replaced
by something more powerful and general in future versions. The
functionality per se will be kept, however.
Using this over ``record_xml_property`` can help when using CI tools to parse the XML report.
However, some parsers are quite strict about the elements and attributes that are allowed.
Many tools use an XSD schema (like the example below) to validate incoming XML.
Make sure you are using attribute names that are allowed by your parser.
Below is the schema used by Jenkins to validate the XML report:
.. code-block:: xml
<xs:element name="testcase">
<xs:complexType>
<xs:sequence>
<xs:element ref="skipped" minOccurs="0" maxOccurs="1"/>
<xs:element ref="error" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="failure" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="system-out" minOccurs="0" maxOccurs="unbounded"/>
<xs:element ref="system-err" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="name" type="xs:string" use="required"/>
<xs:attribute name="assertions" type="xs:string" use="optional"/>
<xs:attribute name="time" type="xs:string" use="optional"/>
<xs:attribute name="classname" type="xs:string" use="optional"/>
<xs:attribute name="status" type="xs:string" use="optional"/>
</xs:complexType>
</xs:element>
LogXML: add_global_property LogXML: add_global_property
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -387,6 +447,7 @@ hook was invoked::
$ python myinvoke.py
*** test run reporting finishing
.. note::
@@ -609,6 +609,8 @@ All runtest related hooks receive a :py:class:`pytest.Item <_pytest.main.Item>`
.. autofunction:: pytest_runtestloop
.. autofunction:: pytest_runtest_protocol
+.. autofunction:: pytest_runtest_logstart
+.. autofunction:: pytest_runtest_logfinish
.. autofunction:: pytest_runtest_setup
.. autofunction:: pytest_runtest_call
.. autofunction:: pytest_runtest_teardown
@@ -693,14 +695,14 @@ Reference of objects involved in hooks
.. autoclass:: _pytest.config.Parser()
:members:
-.. autoclass:: _pytest.main.Node()
+.. autoclass:: _pytest.nodes.Node()
:members:
-.. autoclass:: _pytest.main.Collector()
+.. autoclass:: _pytest.nodes.Collector()
:members:
:show-inheritance:
-.. autoclass:: _pytest.main.FSCollector()
+.. autoclass:: _pytest.nodes.FSCollector()
:members:
:show-inheritance:
@@ -708,7 +710,7 @@ Reference of objects involved in hooks
:members:
:show-inheritance:
-.. autoclass:: _pytest.main.Item()
+.. autoclass:: _pytest.nodes.Item()
:members:
:show-inheritance:
@@ -18,7 +18,8 @@ from _pytest.debugging import pytestPDB as __pytestPDB
from _pytest.recwarn import warns, deprecated_call
from _pytest.outcomes import fail, skip, importorskip, exit, xfail
from _pytest.mark import MARK_GEN as mark, param
-from _pytest.main import Item, Collector, File, Session
+from _pytest.main import Session
+from _pytest.nodes import Item, Collector, File
from _pytest.fixtures import fillfixtures as _fillfuncargs
from _pytest.python import (
Module, Class, Instance, Function, Generator,
@@ -1,5 +1,6 @@
-invoke
+devpi-client
-tox
gitpython
+invoke
towncrier
+tox
wheel
@@ -536,7 +536,7 @@ class TestInvocationVariants(object):
path = testdir.mkpydir("tpkg")
path.join("test_hello.py").write('raise ImportError')
-result = testdir.runpytest_subprocess("--pyargs", "tpkg.test_hello")
+result = testdir.runpytest("--pyargs", "tpkg.test_hello", syspathinsert=True)
assert result.ret != 0
result.stdout.fnmatch_lines([
@@ -554,7 +554,7 @@ class TestInvocationVariants(object):
result.stdout.fnmatch_lines([
"*2 passed*"
])
-result = testdir.runpytest("--pyargs", "tpkg.test_hello")
+result = testdir.runpytest("--pyargs", "tpkg.test_hello", syspathinsert=True)
assert result.ret == 0
result.stdout.fnmatch_lines([
"*1 passed*"
@@ -578,7 +578,7 @@ class TestInvocationVariants(object):
])
monkeypatch.setenv('PYTHONPATH', join_pythonpath(testdir))
-result = testdir.runpytest("--pyargs", "tpkg.test_missing")
+result = testdir.runpytest("--pyargs", "tpkg.test_missing", syspathinsert=True)
assert result.ret != 0
result.stderr.fnmatch_lines([
"*not*found*test_missing*",
@@ -902,7 +902,7 @@ def test_deferred_hook_checking(testdir):
testdir.syspathinsert()
testdir.makepyfile(**{
'plugin.py': """
-class Hooks:
+class Hooks(object):
def pytest_my_hook(self, config):
pass
@@ -58,7 +58,6 @@ def test_str_args_deprecated(tmpdir, testdir):
warnings.append(message)
ret = pytest.main("%s -x" % tmpdir, plugins=[Collect()])
-testdir.delete_loaded_modules()
msg = ('passing a string to pytest.main() is deprecated, '
'pass a list of arguments instead.')
assert msg in warnings
@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
import logging
+import pytest
logger = logging.getLogger(__name__)
sublogger = logging.getLogger(__name__ + '.baz')
@@ -26,6 +27,30 @@ def test_change_level(caplog):
assert 'CRITICAL' in caplog.text
def test_change_level_undo(testdir):
"""Ensure that 'set_level' is undone after the end of the test"""
testdir.makepyfile('''
import logging
def test1(caplog):
caplog.set_level(logging.INFO)
# using + operator here so fnmatch_lines doesn't match the code in the traceback
logging.info('log from ' + 'test1')
assert 0
def test2(caplog):
# using + operator here so fnmatch_lines doesn't match the code in the traceback
logging.info('log from ' + 'test2')
assert 0
''')
result = testdir.runpytest_subprocess()
result.stdout.fnmatch_lines([
'*log from test1*',
'*2 failed in *',
])
assert 'log from test2' not in result.stdout.str()
def test_with_statement(caplog):
with caplog.at_level(logging.INFO):
logger.debug('handler DEBUG level')
@@ -42,6 +67,7 @@ def test_with_statement(caplog):
def test_log_access(caplog):
+caplog.set_level(logging.INFO)
logger.info('boo %s', 'arg')
assert caplog.records[0].levelname == 'INFO'
assert caplog.records[0].msg == 'boo %s'
@@ -49,6 +75,7 @@ def test_log_access(caplog):
def test_record_tuples(caplog):
+caplog.set_level(logging.INFO)
logger.info('boo %s', 'arg')
assert caplog.record_tuples == [
@@ -57,6 +84,7 @@ def test_record_tuples(caplog):
def test_unicode(caplog):
+caplog.set_level(logging.INFO)
logger.info(u'')
assert caplog.records[0].levelname == 'INFO'
assert caplog.records[0].msg == u''
@@ -64,7 +92,29 @@ def test_unicode(caplog):
def test_clear(caplog):
+caplog.set_level(logging.INFO)
logger.info(u'')
assert len(caplog.records)
caplog.clear()
assert not len(caplog.records)
@pytest.fixture
def logging_during_setup_and_teardown(caplog):
caplog.set_level('INFO')
logger.info('a_setup_log')
yield
logger.info('a_teardown_log')
assert [x.message for x in caplog.get_records('teardown')] == ['a_teardown_log']
def test_caplog_captures_for_all_stages(caplog, logging_during_setup_and_teardown):
assert not caplog.records
assert not caplog.get_records('call')
logger.info('a_call_log')
assert [x.message for x in caplog.get_records('call')] == ['a_call_log']
assert [x.message for x in caplog.get_records('setup')] == ['a_setup_log']
# This reaches into private API; don't use this type of thing in real tests!
assert set(caplog._item.catch_log_handlers.keys()) == {'setup', 'call'}
@@ -0,0 +1,29 @@
import logging
import py.io
from _pytest.logging import ColoredLevelFormatter
def test_coloredlogformatter():
logfmt = '%(filename)-25s %(lineno)4d %(levelname)-8s %(message)s'
record = logging.LogRecord(
name='dummy', level=logging.INFO, pathname='dummypath', lineno=10,
msg='Test Message', args=(), exc_info=False)
class ColorConfig(object):
class option(object):
pass
tw = py.io.TerminalWriter()
tw.hasmarkup = True
formatter = ColoredLevelFormatter(tw, logfmt)
output = formatter.format(record)
assert output == ('dummypath 10 '
'\x1b[32mINFO \x1b[0m Test Message')
tw.hasmarkup = False
formatter = ColoredLevelFormatter(tw, logfmt)
output = formatter.format(record)
assert output == ('dummypath 10 '
'INFO Test Message')
@@ -1,5 +1,8 @@
# -*- coding: utf-8 -*-
import os
+import six
import pytest
@@ -35,7 +38,7 @@ def test_messages_logged(testdir):
logger.info('text going to logger')
assert False
''')
-result = testdir.runpytest()
+result = testdir.runpytest('--log-level=INFO')
assert result.ret == 1
result.stdout.fnmatch_lines(['*- Captured *log call -*',
'*text going to logger*'])
@@ -58,7 +61,7 @@ def test_setup_logging(testdir):
logger.info('text going to logger from call')
assert False
''')
-result = testdir.runpytest()
+result = testdir.runpytest('--log-level=INFO')
assert result.ret == 1
result.stdout.fnmatch_lines(['*- Captured *log setup -*',
'*text going to logger from setup*',
@@ -79,7 +82,7 @@ def test_teardown_logging(testdir):
logger.info('text going to logger from teardown')
assert False
''')
-result = testdir.runpytest()
+result = testdir.runpytest('--log-level=INFO')
assert result.ret == 1
result.stdout.fnmatch_lines(['*- Captured *log call -*',
'*text going to logger from call*',
@@ -141,6 +144,30 @@ def test_disable_log_capturing_ini(testdir):
result.stdout.fnmatch_lines(['*- Captured *log call -*'])
@pytest.mark.parametrize('enabled', [True, False])
def test_log_cli_enabled_disabled(testdir, enabled):
msg = 'critical message logged by test'
testdir.makepyfile('''
import logging
def test_log_cli():
logging.critical("{}")
'''.format(msg))
if enabled:
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest()
if enabled:
result.stdout.fnmatch_lines([
'test_log_cli_enabled_disabled.py::test_log_cli ',
'test_log_cli_enabled_disabled.py* CRITICAL critical message logged by test',
'PASSED*',
])
else:
assert msg not in result.stdout.str()
def test_log_cli_default_level(testdir):
# Default log file level
testdir.makepyfile('''
@@ -148,32 +175,103 @@ def test_log_cli_default_level(testdir):
import logging
def test_log_cli(request):
plugin = request.config.pluginmanager.getplugin('logging-plugin')
-assert plugin.log_cli_handler.level == logging.WARNING
+assert plugin.log_cli_handler.level == logging.NOTSET
-logging.getLogger('catchlog').info("This log message won't be shown")
+logging.getLogger('catchlog').info("INFO message won't be shown")
-logging.getLogger('catchlog').warning("This log message will be shown")
+logging.getLogger('catchlog').warning("WARNING message will be shown")
-print('PASSED')
+''')
+testdir.makeini('''
+[pytest]
+log_cli=true
''')
-result = testdir.runpytest('-s')
+result = testdir.runpytest()
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
-'test_log_cli_default_level.py PASSED',
+'test_log_cli_default_level.py::test_log_cli ',
+'test_log_cli_default_level.py*WARNING message will be shown*',
])
-result.stderr.fnmatch_lines([
+assert "INFO message won't be shown" not in result.stdout.str()
-"* This log message will be shown"
-])
-for line in result.errlines:
-try:
-assert "This log message won't be shown" in line
-pytest.fail("A log message was shown and it shouldn't have been")
-except AssertionError:
-continue
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
def test_log_cli_default_level_multiple_tests(testdir, request):
"""Ensure we reset the first newline added by the live logger between tests"""
filename = request.node.name + '.py'
testdir.makepyfile('''
import logging
def test_log_1():
logging.warning("log message from test_log_1")
def test_log_2():
logging.warning("log message from test_log_2")
''')
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest()
result.stdout.fnmatch_lines([
'{}::test_log_1 '.format(filename),
'*WARNING*log message from test_log_1*',
'PASSED *50%*',
'{}::test_log_2 '.format(filename),
'*WARNING*log message from test_log_2*',
'PASSED *100%*',
'=* 2 passed in *=',
])
def test_log_cli_default_level_sections(testdir, request):
"""Check that with live logging enable we are printing the correct headers during setup/call/teardown."""
filename = request.node.name + '.py'
testdir.makepyfile('''
import pytest
import logging
@pytest.fixture
def fix(request):
logging.warning("log message from setup of {}".format(request.node.name))
yield
logging.warning("log message from teardown of {}".format(request.node.name))
def test_log_1(fix):
logging.warning("log message from test_log_1")
def test_log_2(fix):
logging.warning("log message from test_log_2")
''')
testdir.makeini('''
[pytest]
log_cli=true
''')
result = testdir.runpytest()
result.stdout.fnmatch_lines([
'{}::test_log_1 '.format(filename),
'*-- live log setup --*',
'*WARNING*log message from setup of test_log_1*',
'*-- live log call --*',
'*WARNING*log message from test_log_1*',
'PASSED *50%*',
'*-- live log teardown --*',
'*WARNING*log message from teardown of test_log_1*',
'{}::test_log_2 '.format(filename),
'*-- live log setup --*',
'*WARNING*log message from setup of test_log_2*',
'*-- live log call --*',
'*WARNING*log message from test_log_2*',
'PASSED *100%*',
'*-- live log teardown --*',
'*WARNING*log message from teardown of test_log_2*',
'=* 2 passed in *=',
])
def test_log_cli_level(testdir):
# Default log file level
testdir.makepyfile('''
@@ -186,22 +284,19 @@ def test_log_cli_level(testdir):
logging.getLogger('catchlog').info("This log message will be shown")
print('PASSED')
''')
+testdir.makeini('''
+[pytest]
+log_cli=true
+''')
result = testdir.runpytest('-s', '--log-cli-level=INFO')
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
-'test_log_cli_level.py PASSED',
+'test_log_cli_level.py*This log message will be shown',
-'PASSED', # 'PASSED' on its own line because the log message prints a new line
])
-result.stderr.fnmatch_lines([
+assert "This log message won't be shown" not in result.stdout.str()
-"* This log message will be shown"
-])
-for line in result.errlines:
-try:
-assert "This log message won't be shown" in line
-pytest.fail("A log message was shown and it shouldn't have been")
-except AssertionError:
-continue
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
@@ -210,17 +305,10 @@ def test_log_cli_level(testdir):
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
-'test_log_cli_level.py PASSED',
+'test_log_cli_level.py* This log message will be shown',
-'PASSED', # 'PASSED' on its own line because the log message prints a new line
])
-result.stderr.fnmatch_lines([
+assert "This log message won't be shown" not in result.stdout.str()
-"* This log message will be shown"
-])
-for line in result.errlines:
-try:
-assert "This log message won't be shown" in line
-pytest.fail("A log message was shown and it shouldn't have been")
-except AssertionError:
-continue
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
@@ -230,6 +318,7 @@ def test_log_cli_ini_level(testdir):
testdir.makeini(
"""
[pytest]
+log_cli=true
log_cli_level = INFO
""")
testdir.makepyfile('''
@@ -247,17 +336,10 @@ def test_log_cli_ini_level(testdir):
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
-'test_log_cli_ini_level.py PASSED',
+'test_log_cli_ini_level.py* This log message will be shown',
-'PASSED', # 'PASSED' on its own line because the log message prints a new line
])
-result.stderr.fnmatch_lines([
+assert "This log message won't be shown" not in result.stdout.str()
-"* This log message will be shown"
-])
-for line in result.errlines:
-try:
-assert "This log message won't be shown" in line
-pytest.fail("A log message was shown and it shouldn't have been")
-except AssertionError:
-continue
# make sure that we get a '0' exit code for the testsuite
assert result.ret == 0
@@ -278,7 +360,7 @@ def test_log_file_cli(testdir):
log_file = testdir.tmpdir.join('pytest.log').strpath
-result = testdir.runpytest('-s', '--log-file={0}'.format(log_file))
+result = testdir.runpytest('-s', '--log-file={0}'.format(log_file), '--log-file-level=WARNING')
# fnmatch_lines does an assertion internally
result.stdout.fnmatch_lines([
@@ -327,6 +409,16 @@ def test_log_file_cli_level(testdir):
assert "This log message won't be shown" not in contents
def test_log_level_not_changed_by_default(testdir):
testdir.makepyfile('''
import logging
def test_log_file():
assert logging.getLogger().level == logging.WARNING
''')
result = testdir.runpytest('-s')
result.stdout.fnmatch_lines('* 1 passed in *')
def test_log_file_ini(testdir):
log_file = testdir.tmpdir.join('pytest.log').strpath
@@ -334,6 +426,7 @@ def test_log_file_ini(testdir):
"""
[pytest]
log_file={0}
+log_file_level=WARNING
""".format(log_file))
testdir.makepyfile('''
import pytest
@@ -396,3 +489,53 @@ def test_log_file_ini_level(testdir):
contents = rfh.read()
assert "This log message will be shown" in contents
assert "This log message won't be shown" not in contents
@pytest.mark.parametrize('has_capture_manager', [True, False])
def test_live_logging_suspends_capture(has_capture_manager, request):
"""Test that capture manager is suspended when we emitting messages for live logging.
This tests the implementation calls instead of behavior because it is difficult/impossible to do it using
``testdir`` facilities because they do their own capturing.
We parametrize the test to also make sure _LiveLoggingStreamHandler works correctly if no capture manager plugin
is installed.
"""
import logging
from functools import partial
from _pytest.capture import CaptureManager
from _pytest.logging import _LiveLoggingStreamHandler
class MockCaptureManager:
calls = []
def suspend_global_capture(self):
self.calls.append('suspend_global_capture')
def resume_global_capture(self):
self.calls.append('resume_global_capture')
# sanity check
assert CaptureManager.suspend_capture_item
assert CaptureManager.resume_global_capture
class DummyTerminal(six.StringIO):
def section(self, *args, **kwargs):
pass
out_file = DummyTerminal()
capture_manager = MockCaptureManager() if has_capture_manager else None
handler = _LiveLoggingStreamHandler(out_file, capture_manager)
handler.set_when('call')
logger = logging.getLogger(__name__ + '.test_live_logging_suspends_capture')
logger.addHandler(handler)
request.addfinalizer(partial(logger.removeHandler, handler))
logger.critical('some message')
if has_capture_manager:
assert MockCaptureManager.calls == ['suspend_global_capture', 'resume_global_capture']
else:
assert MockCaptureManager.calls == []
assert out_file.getvalue() == '\nsome message\n'
@@ -5,11 +5,8 @@ from textwrap import dedent
import _pytest._code
import pytest
-from _pytest.main import (
-Collector,
-EXIT_NOTESTSCOLLECTED
-)
+from _pytest.main import EXIT_NOTESTSCOLLECTED
+from _pytest.nodes import Collector
ignore_parametrized_marks = pytest.mark.filterwarnings('ignore:Applying marks directly to parameters')
@@ -882,10 +879,10 @@ class TestConftestCustomization(object):
import sys, os, imp
from _pytest.python import Module
-class Loader:
+class Loader(object):
def load_module(self, name):
return imp.load_source(name, name + ".narf")
-class Finder:
+class Finder(object):
def find_module(self, name, path=None):
if os.path.exists(name + ".narf"):
return Loader()
@@ -2828,7 +2828,7 @@ class TestShowFixtures(object):
def test_show_fixtures_indented_in_class(self, testdir):
p = testdir.makepyfile(dedent('''
import pytest
-class TestClass:
+class TestClass(object):
@pytest.fixture
def fixture1(self):
"""line1
@@ -14,7 +14,7 @@ PY3 = sys.version_info >= (3, 0)
class TestMetafunc(object):
-def Metafunc(self, func):
+def Metafunc(self, func, config=None):
# the unit tests of this class check if things work correctly
# on the funcarg level, so we don't need a full blown
# initialization
@@ -26,7 +26,7 @@ class TestMetafunc(object):
names = fixtures.getfuncargnames(func)
fixtureinfo = FixtureInfo(names)
-return python.Metafunc(func, fixtureinfo, None)
+return python.Metafunc(func, fixtureinfo, config)
def test_no_funcargs(self, testdir):
def function():
@@ -156,7 +156,19 @@ class TestMetafunc(object):
def test_parametrize_empty_list(self):
def func(y):
pass
-metafunc = self.Metafunc(func)
+class MockConfig(object):
+def getini(self, name):
+return ''
+@property
+def hook(self):
+return self
+def pytest_make_parametrize_id(self, **kw):
+pass
+metafunc = self.Metafunc(func, MockConfig())
metafunc.parametrize("y", [])
assert 'skip' == metafunc._calls[0].marks[0].name
@@ -235,6 +247,25 @@ class TestMetafunc(object):
for val, expected in values:
assert _idval(val, 'a', 6, None) == expected
def test_class_or_function_idval(self):
"""unittest for the expected behavior to obtain ids for parametrized
values that are classes or functions: their __name__.
"""
from _pytest.python import _idval
class TestClass(object):
pass
def test_function():
pass
values = [
(TestClass, "TestClass"),
(test_function, "test_function"),
]
for val, expected in values:
assert _idval(val, 'a', 6, None) == expected
@pytest.mark.issue250
def test_idmaker_autoname(self):
from _pytest.python import idmaker
testing/test_cache.py → testing/test_cacheprovider.py (Executable file → Normal file)
@@ -31,7 +31,7 @@ class TestNewAPI(object):
def test_cache_writefail_cachfile_silent(self, testdir):
testdir.makeini("[pytest]")
-testdir.tmpdir.join('.cache').write('gone wrong')
+testdir.tmpdir.join('.pytest_cache').write('gone wrong')
config = testdir.parseconfigure()
cache = config.cache
cache.set('test/broken', [])
@@ -39,14 +39,14 @@
@pytest.mark.skipif(sys.platform.startswith('win'), reason='no chmod on windows')
def test_cache_writefail_permissions(self, testdir):
testdir.makeini("[pytest]")
-testdir.tmpdir.ensure_dir('.cache').chmod(0)
+testdir.tmpdir.ensure_dir('.pytest_cache').chmod(0)
config = testdir.parseconfigure()
cache = config.cache
cache.set('test/broken', [])
@pytest.mark.skipif(sys.platform.startswith('win'), reason='no chmod on windows')
def test_cache_failure_warns(self, testdir):
-testdir.tmpdir.ensure_dir('.cache').chmod(0)
+testdir.tmpdir.ensure_dir('.pytest_cache').chmod(0)
testdir.makepyfile("""
def test_error():
raise Exception
@@ -127,7 +127,7 @@ def test_cache_reportheader(testdir):
""")
result = testdir.runpytest("-v")
result.stdout.fnmatch_lines([
-"cachedir: .cache"
+"cachedir: .pytest_cache"
])
@@ -201,8 +201,8 @@ class TestLastFailed(object):
])
# Run this again to make sure clear-cache is robust
-if os.path.isdir('.cache'):
-shutil.rmtree('.cache')
+if os.path.isdir('.pytest_cache'):
+shutil.rmtree('.pytest_cache')
result = testdir.runpytest("--lf", "--cache-clear")
result.stdout.fnmatch_lines([
"*1 failed*2 passed*",
@@ -495,15 +495,15 @@ class TestLastFailed(object):
# Issue #1342
testdir.makepyfile(test_empty='')
testdir.runpytest('-q', '--lf')
-assert not os.path.exists('.cache')
+assert not os.path.exists('.pytest_cache')
testdir.makepyfile(test_successful='def test_success():\n    assert True')
testdir.runpytest('-q', '--lf')
-assert not os.path.exists('.cache')
+assert not os.path.exists('.pytest_cache')
testdir.makepyfile(test_errored='def test_error():\n    assert False')
testdir.runpytest('-q', '--lf')
-assert os.path.exists('.cache')
+assert os.path.exists('.pytest_cache')
def test_xfail_not_considered_failure(self, testdir):
testdir.makepyfile('''
@@ -1245,7 +1245,7 @@ def test_py36_windowsconsoleio_workaround_non_standard_streams():
"""
from _pytest.capture import _py36_windowsconsoleio_workaround
-class DummyStream:
+class DummyStream(object):
def write(self, s):
pass
@@ -781,16 +781,18 @@ class TestOverrideIniArgs(object):
testdir.makeini("""
[pytest]
custom_option_1=custom_option_1
-custom_option_2=custom_option_2""")
+custom_option_2=custom_option_2
+""")
testdir.makepyfile("""
def test_multiple_options(pytestconfig):
prefix = "custom_option"
for x in range(1, 5):
ini_value=pytestconfig.getini("%s_%d" % (prefix, x))
-print('\\nini%d:%s' % (x, ini_value))""")
+print('\\nini%d:%s' % (x, ini_value))
+""")
result = testdir.runpytest(
"--override-ini", 'custom_option_1=fulldir=/tmp/user1',
-'custom_option_2=url=/tmp/user2?a=b&d=e',
+'-o', 'custom_option_2=url=/tmp/user2?a=b&d=e',
"-o", 'custom_option_3=True',
"-o", 'custom_option_4=no', "-s")
result.stdout.fnmatch_lines(["ini1:fulldir=/tmp/user1",
@ -853,10 +855,42 @@ class TestOverrideIniArgs(object):
assert rootdir == tmpdir assert rootdir == tmpdir
assert inifile is None assert inifile is None
def test_addopts_before_initini(self, testdir, tmpdir, monkeypatch): def test_addopts_before_initini(self, monkeypatch):
cache_dir = '.custom_cache' cache_dir = '.custom_cache'
monkeypatch.setenv('PYTEST_ADDOPTS', '-o cache_dir=%s' % cache_dir) monkeypatch.setenv('PYTEST_ADDOPTS', '-o cache_dir=%s' % cache_dir)
from _pytest.config import get_config from _pytest.config import get_config
config = get_config() config = get_config()
config._preparse([], addopts=True) config._preparse([], addopts=True)
assert config._override_ini == [['cache_dir=%s' % cache_dir]] assert config._override_ini == ['cache_dir=%s' % cache_dir]
def test_override_ini_does_not_contain_paths(self):
"""Check that -o no longer swallows all options after it (#3103)"""
from _pytest.config import get_config
config = get_config()
config._preparse(['-o', 'cache_dir=/cache', '/some/test/path'])
assert config._override_ini == ['cache_dir=/cache']
def test_multiple_override_ini_options(self, testdir, request):
"""Ensure a file path following a '-o' option does not generate an error (#3103)"""
testdir.makepyfile(**{
"conftest.py": """
def pytest_addoption(parser):
parser.addini('foo', default=None, help='some option')
parser.addini('bar', default=None, help='some option')
""",
"test_foo.py": """
def test(pytestconfig):
assert pytestconfig.getini('foo') == '1'
assert pytestconfig.getini('bar') == '0'
""",
"test_bar.py": """
def test():
assert False
""",
})
result = testdir.runpytest('-o', 'foo=1', '-o', 'bar=0', 'test_foo.py')
assert 'ERROR:' not in result.stderr.str()
result.stdout.fnmatch_lines([
'collected 1 item',
'*= 1 passed in *=',
])

View File

@@ -879,6 +879,27 @@ def test_record_property_same_name(testdir):
     pnodes[1].assert_attr(name="foo", value="baz")

+
+def test_record_attribute(testdir):
+    testdir.makepyfile("""
+        import pytest
+
+        @pytest.fixture
+        def other(record_xml_attribute):
+            record_xml_attribute("bar", 1)
+
+        def test_record(record_xml_attribute, other):
+            record_xml_attribute("foo", "<1");
+    """)
+    result, dom = runandparse(testdir, '-rw')
+    node = dom.find_first_by_tag("testsuite")
+    tnode = node.find_first_by_tag("testcase")
+    tnode.assert_attr(bar="1")
+    tnode.assert_attr(foo="<1")
+    result.stdout.fnmatch_lines([
+        'test_record_attribute.py::test_record',
+        '*record_xml_attribute*experimental*',
+    ])
+
+
 def test_random_report_log_xdist(testdir):
     """xdist calls pytest_runtest_logreport as they are executed by the slaves,
     with nodes from several nodes overlapping, so junitxml must cope with that
@@ -3,7 +3,10 @@ import os
 import sys

 import pytest
-from _pytest.mark import MarkGenerator as Mark, ParameterSet, transfer_markers
+from _pytest.mark import (
+    MarkGenerator as Mark, ParameterSet, transfer_markers,
+    EMPTY_PARAMETERSET_OPTION,
+)


 class TestMark(object):

@@ -344,6 +347,21 @@ def test_keyword_option_parametrize(spec, testdir):
     assert list(passed) == list(passed_result)


+@pytest.mark.parametrize("spec", [
+    ("foo or import", "ERROR: Python keyword 'import' not accepted in expressions passed to '-k'"),
+    ("foo or", "ERROR: Wrong expression passed to '-k': foo or")
+])
+def test_keyword_option_wrong_arguments(spec, testdir, capsys):
+    testdir.makepyfile("""
+        def test_func(arg):
+            pass
+    """)
+    opt, expected_result = spec
+    testdir.inline_run("-k", opt)
+    out = capsys.readouterr().err
+    assert expected_result in out
+
+
 def test_parametrized_collected_from_command_line(testdir):
     """Parametrized test not collected if test named specified
     in command line issue#649.

@@ -876,3 +894,27 @@ class TestMarkDecorator(object):
     ])
     def test__eq__(self, lhs, rhs, expected):
         assert (lhs == rhs) == expected
+
+
+@pytest.mark.parametrize('mark', [None, '', 'skip', 'xfail'])
+def test_parameterset_for_parametrize_marks(testdir, mark):
+    if mark is not None:
+        testdir.makeini(
+            "[pytest]\n{}={}".format(EMPTY_PARAMETERSET_OPTION, mark))
+    config = testdir.parseconfig()
+    from _pytest.mark import pytest_configure, get_empty_parameterset_mark
+    pytest_configure(config)
+    result_mark = get_empty_parameterset_mark(config, ['a'], all)
+    if mark in (None, ''):
+        # normalize to the requested name
+        mark = 'skip'
+    assert result_mark.name == mark
+    assert result_mark.kwargs['reason'].startswith("got empty parameter set ")
+    if mark == 'xfail':
+        assert result_mark.kwargs.get('run') is False
+
+
+def test_parameterset_for_parametrize_bad_markname(testdir):
+    with pytest.raises(pytest.UsageError):
+        test_parameterset_for_parametrize_marks(testdir, 'bad')
@@ -1,8 +1,12 @@
 # -*- coding: utf-8 -*-
 from __future__ import absolute_import, division, print_function
-import pytest
 import os
+import py.path
+import pytest
+import sys
+
+import _pytest.pytester as pytester
 from _pytest.pytester import HookRecorder
+from _pytest.pytester import CwdSnapshot, SysModulesSnapshot, SysPathsSnapshot
 from _pytest.config import PytestPluginManager
 from _pytest.main import EXIT_OK, EXIT_TESTSFAILED

@@ -131,14 +135,116 @@ def test_makepyfile_utf8(testdir):
     assert u"mixed_encoding = u'São Paulo'".encode('utf-8') in p.read('rb')


-def test_inline_run_clean_modules(testdir):
-    test_mod = testdir.makepyfile("def test_foo(): assert True")
-    result = testdir.inline_run(str(test_mod))
-    assert result.ret == EXIT_OK
-    # rewrite module, now test should fail if module was re-imported
-    test_mod.write("def test_foo(): assert False")
-    result2 = testdir.inline_run(str(test_mod))
-    assert result2.ret == EXIT_TESTSFAILED
+class TestInlineRunModulesCleanup(object):
+    def test_inline_run_test_module_not_cleaned_up(self, testdir):
+        test_mod = testdir.makepyfile("def test_foo(): assert True")
+        result = testdir.inline_run(str(test_mod))
+        assert result.ret == EXIT_OK
+        # rewrite module, now test should fail if module was re-imported
+        test_mod.write("def test_foo(): assert False")
+        result2 = testdir.inline_run(str(test_mod))
+        assert result2.ret == EXIT_TESTSFAILED
+
+    def spy_factory(self):
+        class SysModulesSnapshotSpy(object):
+            instances = []
+
+            def __init__(self, preserve=None):
+                SysModulesSnapshotSpy.instances.append(self)
+                self._spy_restore_count = 0
+                self._spy_preserve = preserve
+                self.__snapshot = SysModulesSnapshot(preserve=preserve)
+
+            def restore(self):
+                self._spy_restore_count += 1
+                return self.__snapshot.restore()
+        return SysModulesSnapshotSpy
+
+    def test_inline_run_taking_and_restoring_a_sys_modules_snapshot(
+            self, testdir, monkeypatch):
+        spy_factory = self.spy_factory()
+        monkeypatch.setattr(pytester, "SysModulesSnapshot", spy_factory)
+        original = dict(sys.modules)
+        testdir.syspathinsert()
+        testdir.makepyfile(import1="# you son of a silly person")
+        testdir.makepyfile(import2="# my hovercraft is full of eels")
+        test_mod = testdir.makepyfile("""
+            import import1
+            def test_foo(): import import2""")
+        testdir.inline_run(str(test_mod))
+        assert len(spy_factory.instances) == 1
+        spy = spy_factory.instances[0]
+        assert spy._spy_restore_count == 1
+        assert sys.modules == original
+        assert all(sys.modules[x] is original[x] for x in sys.modules)
+
+    def test_inline_run_sys_modules_snapshot_restore_preserving_modules(
+            self, testdir, monkeypatch):
+        spy_factory = self.spy_factory()
+        monkeypatch.setattr(pytester, "SysModulesSnapshot", spy_factory)
+        test_mod = testdir.makepyfile("def test_foo(): pass")
+        testdir.inline_run(str(test_mod))
+        spy = spy_factory.instances[0]
+        assert not spy._spy_preserve("black_knight")
+        assert spy._spy_preserve("zope")
+        assert spy._spy_preserve("zope.interface")
+        assert spy._spy_preserve("zopelicious")
+
+    def test_external_test_module_imports_not_cleaned_up(self, testdir):
+        testdir.syspathinsert()
+        testdir.makepyfile(imported="data = 'you son of a silly person'")
+        import imported
+        test_mod = testdir.makepyfile("""
+            def test_foo():
+                import imported
+                imported.data = 42""")
+        testdir.inline_run(str(test_mod))
+        assert imported.data == 42
+
+
+def test_inline_run_clean_sys_paths(testdir):
+    def test_sys_path_change_cleanup(self, testdir):
+        test_path1 = testdir.tmpdir.join("boink1").strpath
+        test_path2 = testdir.tmpdir.join("boink2").strpath
+        test_path3 = testdir.tmpdir.join("boink3").strpath
+        sys.path.append(test_path1)
+        sys.meta_path.append(test_path1)
+        original_path = list(sys.path)
+        original_meta_path = list(sys.meta_path)
+        test_mod = testdir.makepyfile("""
+            import sys
+            sys.path.append({:test_path2})
+            sys.meta_path.append({:test_path2})
+            def test_foo():
+                sys.path.append({:test_path3})
+                sys.meta_path.append({:test_path3})""".format(locals()))
+        testdir.inline_run(str(test_mod))
+        assert sys.path == original_path
+        assert sys.meta_path == original_meta_path
+
+    def spy_factory(self):
+        class SysPathsSnapshotSpy(object):
+            instances = []
+
+            def __init__(self):
+                SysPathsSnapshotSpy.instances.append(self)
+                self._spy_restore_count = 0
+                self.__snapshot = SysPathsSnapshot()
+
+            def restore(self):
+                self._spy_restore_count += 1
+                return self.__snapshot.restore()
+        return SysPathsSnapshotSpy
+
+    def test_inline_run_taking_and_restoring_a_sys_paths_snapshot(
+            self, testdir, monkeypatch):
+        spy_factory = self.spy_factory()
+        monkeypatch.setattr(pytester, "SysPathsSnapshot", spy_factory)
+        test_mod = testdir.makepyfile("def test_foo(): pass")
+        testdir.inline_run(str(test_mod))
+        assert len(spy_factory.instances) == 1
+        spy = spy_factory.instances[0]
+        assert spy._spy_restore_count == 1
@@ -147,3 +253,126 @@ def test_assert_outcomes_after_pytest_error(testdir):
     result = testdir.runpytest('--unexpected-argument')
     with pytest.raises(ValueError, message="Pytest terminal report not found"):
         result.assert_outcomes(passed=0)
+
+
+def test_cwd_snapshot(tmpdir):
+    foo = tmpdir.ensure('foo', dir=1)
+    bar = tmpdir.ensure('bar', dir=1)
+    foo.chdir()
+    snapshot = CwdSnapshot()
+    bar.chdir()
+    assert py.path.local() == bar
+    snapshot.restore()
+    assert py.path.local() == foo
+
+
+class TestSysModulesSnapshot(object):
+    key = 'my-test-module'
+
+    def test_remove_added(self):
+        original = dict(sys.modules)
+        assert self.key not in sys.modules
+        snapshot = SysModulesSnapshot()
+        sys.modules[self.key] = 'something'
+        assert self.key in sys.modules
+        snapshot.restore()
+        assert sys.modules == original
+
+    def test_add_removed(self, monkeypatch):
+        assert self.key not in sys.modules
+        monkeypatch.setitem(sys.modules, self.key, 'something')
+        assert self.key in sys.modules
+        original = dict(sys.modules)
+        snapshot = SysModulesSnapshot()
+        del sys.modules[self.key]
+        assert self.key not in sys.modules
+        snapshot.restore()
+        assert sys.modules == original
+
+    def test_restore_reloaded(self, monkeypatch):
+        assert self.key not in sys.modules
+        monkeypatch.setitem(sys.modules, self.key, 'something')
+        assert self.key in sys.modules
+        original = dict(sys.modules)
+        snapshot = SysModulesSnapshot()
+        sys.modules[self.key] = 'something else'
+        snapshot.restore()
+        assert sys.modules == original
+
+    def test_preserve_modules(self, monkeypatch):
+        key = [self.key + str(i) for i in range(3)]
+        assert not any(k in sys.modules for k in key)
+        for i, k in enumerate(key):
+            monkeypatch.setitem(sys.modules, k, 'something' + str(i))
+        original = dict(sys.modules)
+
+        def preserve(name):
+            return name in (key[0], key[1], 'some-other-key')
+
+        snapshot = SysModulesSnapshot(preserve=preserve)
+        sys.modules[key[0]] = original[key[0]] = 'something else0'
+        sys.modules[key[1]] = original[key[1]] = 'something else1'
+        sys.modules[key[2]] = 'something else2'
+        snapshot.restore()
+        assert sys.modules == original
+
+    def test_preserve_container(self, monkeypatch):
+        original = dict(sys.modules)
+        assert self.key not in original
+        replacement = dict(sys.modules)
+        replacement[self.key] = 'life of brian'
+        snapshot = SysModulesSnapshot()
+        monkeypatch.setattr(sys, 'modules', replacement)
+        snapshot.restore()
+        assert sys.modules is replacement
+        assert sys.modules == original
+
+
+@pytest.mark.parametrize('path_type', ('path', 'meta_path'))
+class TestSysPathsSnapshot(object):
+    other_path = {
+        'path': 'meta_path',
+        'meta_path': 'path'}
+
+    @staticmethod
+    def path(n):
+        return 'my-dirty-little-secret-' + str(n)
+
+    def test_restore(self, monkeypatch, path_type):
+        other_path_type = self.other_path[path_type]
+        for i in range(10):
+            assert self.path(i) not in getattr(sys, path_type)
+        sys_path = [self.path(i) for i in range(6)]
+        monkeypatch.setattr(sys, path_type, sys_path)
+        original = list(sys_path)
+        original_other = list(getattr(sys, other_path_type))
+        snapshot = SysPathsSnapshot()
+        transformation = {
+            'source': (0, 1, 2, 3, 4, 5),
+            'target': ( 6, 2, 9, 7, 5, 8)}  # noqa: E201
+        assert sys_path == [self.path(x) for x in transformation['source']]
+        sys_path[1] = self.path(6)
+        sys_path[3] = self.path(7)
+        sys_path.append(self.path(8))
+        del sys_path[4]
+        sys_path[3:3] = [self.path(9)]
+        del sys_path[0]
+        assert sys_path == [self.path(x) for x in transformation['target']]
+        snapshot.restore()
+        assert getattr(sys, path_type) is sys_path
+        assert getattr(sys, path_type) == original
+        assert getattr(sys, other_path_type) == original_other
+
+    def test_preserve_container(self, monkeypatch, path_type):
+        other_path_type = self.other_path[path_type]
+        original_data = list(getattr(sys, path_type))
+        original_other = getattr(sys, other_path_type)
+        original_other_data = list(original_other)
+        new = []
+        snapshot = SysPathsSnapshot()
+        monkeypatch.setattr(sys, path_type, new)
+        snapshot.restore()
+        assert getattr(sys, path_type) is new
+        assert getattr(sys, path_type) == original_data
+        assert getattr(sys, other_path_type) is original_other
+        assert getattr(sys, other_path_type) == original_other_data
@@ -4,7 +4,7 @@ import os

 import _pytest._code
 import py
 import pytest
-from _pytest.main import Node, Item, FSCollector
+from _pytest.nodes import Node, Item, FSCollector
 from _pytest.resultlog import generic_path, ResultLog, \
     pytest_configure, pytest_unconfigure
@@ -204,6 +204,18 @@ class BaseFunctionalTests(object):
         """)
         assert rec.ret == 1

+    def test_logstart_logfinish_hooks(self, testdir):
+        rec = testdir.inline_runsource("""
+            import pytest
+            def test_func():
+                pass
+        """)
+        reps = rec.getcalls("pytest_runtest_logstart pytest_runtest_logfinish")
+        assert [x._name for x in reps] == ['pytest_runtest_logstart', 'pytest_runtest_logfinish']
+        for rep in reps:
+            assert rep.nodeid == 'test_logstart_logfinish_hooks.py::test_func'
+            assert rep.location == ('test_logstart_logfinish_hooks.py', 1, 'test_func')
+
     def test_exact_teardown_issue90(self, testdir):
         rec = testdir.inline_runsource("""
             import pytest
@@ -966,10 +966,10 @@ def test_no_trailing_whitespace_after_inifile_word(testdir):
     assert 'inifile: tox.ini\n' in result.stdout.str()


-class TestProgress:
+class TestProgress(object):

     @pytest.fixture
-    def many_tests_file(self, testdir):
+    def many_tests_files(self, testdir):
         testdir.makepyfile(
             test_bar="""
                 import pytest

@@ -1006,7 +1006,7 @@ class TestProgress:
             '=* 2 passed in *=',
         ])

-    def test_normal(self, many_tests_file, testdir):
+    def test_normal(self, many_tests_files, testdir):
         output = testdir.runpytest()
         output.stdout.re_match_lines([
             r'test_bar.py \.{10} \s+ \[ 50%\]',

@@ -1014,7 +1014,7 @@ class TestProgress:
             r'test_foobar.py \.{5} \s+ \[100%\]',
         ])

-    def test_verbose(self, many_tests_file, testdir):
+    def test_verbose(self, many_tests_files, testdir):
         output = testdir.runpytest('-v')
         output.stdout.re_match_lines([
             r'test_bar.py::test_bar\[0\] PASSED \s+ \[ 5%\]',

@@ -1022,14 +1022,14 @@ class TestProgress:
             r'test_foobar.py::test_foobar\[4\] PASSED \s+ \[100%\]',
         ])

-    def test_xdist_normal(self, many_tests_file, testdir):
+    def test_xdist_normal(self, many_tests_files, testdir):
         pytest.importorskip('xdist')
         output = testdir.runpytest('-n2')
         output.stdout.re_match_lines([
             r'\.{20} \s+ \[100%\]',
         ])

-    def test_xdist_verbose(self, many_tests_file, testdir):
+    def test_xdist_verbose(self, many_tests_files, testdir):
         pytest.importorskip('xdist')
         output = testdir.runpytest('-n2', '-v')
         output.stdout.re_match_lines_random([

@@ -1037,3 +1037,86 @@ class TestProgress:
             r'\[gw\d\] \[\s*\d+%\] PASSED test_foo.py::test_foo\[1\]',
             r'\[gw\d\] \[\s*\d+%\] PASSED test_foobar.py::test_foobar\[1\]',
         ])
+
+    def test_capture_no(self, many_tests_files, testdir):
+        output = testdir.runpytest('-s')
+        output.stdout.re_match_lines([
+            r'test_bar.py \.{10}',
+            r'test_foo.py \.{5}',
+            r'test_foobar.py \.{5}',
+        ])
+
+
+class TestProgressWithTeardown(object):
+    """Ensure we show the correct percentages for tests that fail during teardown (#3088)"""
+
+    @pytest.fixture
+    def contest_with_teardown_fixture(self, testdir):
+        testdir.makeconftest('''
+            import pytest
+
+            @pytest.fixture
+            def fail_teardown():
+                yield
+                assert False
+        ''')
+
+    @pytest.fixture
+    def many_files(self, testdir, contest_with_teardown_fixture):
+        testdir.makepyfile(
+            test_bar='''
+                import pytest
+                @pytest.mark.parametrize('i', range(5))
+                def test_bar(fail_teardown, i):
+                    pass
+            ''',
+            test_foo='''
+                import pytest
+                @pytest.mark.parametrize('i', range(15))
+                def test_foo(fail_teardown, i):
+                    pass
+            ''',
+        )
+
+    def test_teardown_simple(self, testdir, contest_with_teardown_fixture):
+        testdir.makepyfile('''
+            def test_foo(fail_teardown):
+                pass
+        ''')
+        output = testdir.runpytest()
+        output.stdout.re_match_lines([
+            r'test_teardown_simple.py \.E\s+\[100%\]',
+        ])
+
+    def test_teardown_with_test_also_failing(self, testdir, contest_with_teardown_fixture):
+        testdir.makepyfile('''
+            def test_foo(fail_teardown):
+                assert False
+        ''')
+        output = testdir.runpytest()
+        output.stdout.re_match_lines([
+            r'test_teardown_with_test_also_failing.py FE\s+\[100%\]',
+        ])
+
+    def test_teardown_many(self, testdir, many_files):
+        output = testdir.runpytest()
+        output.stdout.re_match_lines([
+            r'test_bar.py (\.E){5}\s+\[ 25%\]',
+            r'test_foo.py (\.E){15}\s+\[100%\]',
+        ])
+
+    def test_teardown_many_verbose(self, testdir, many_files):
+        output = testdir.runpytest('-v')
+        output.stdout.re_match_lines([
+            r'test_bar.py::test_bar\[0\] PASSED\s+\[ 5%\]',
+            r'test_bar.py::test_bar\[0\] ERROR\s+\[ 5%\]',
+            r'test_bar.py::test_bar\[4\] PASSED\s+\[ 25%\]',
+            r'test_bar.py::test_bar\[4\] ERROR\s+\[ 25%\]',
+        ])
+
+    def test_xdist_normal(self, many_files, testdir):
+        pytest.importorskip('xdist')
+        output = testdir.runpytest('-n2')
+        output.stdout.re_match_lines([
+            r'[\.E]{40} \s+ \[100%\]',
+        ])
@@ -129,6 +129,7 @@ basepython = python
 changedir = doc/en
 deps =
     sphinx
+    attrs
     PyYAML
 commands =