Merge remote-tracking branch 'upstream/features'
commit ed118d7f20

 AUTHORS | 5 +
AUTHORS
@@ -35,6 +35,7 @@ Brianna Laugher
Bruno Oliveira
Cal Leeming
Carl Friedrich Bolz
Carlos Jenkins
Ceridwen
Charles Cloud
Charnjit SiNGH (CCSJ)
@@ -99,6 +100,7 @@ Jon Sonesen
Jonas Obrist
Jordan Guymon
Jordan Moldow
Jordan Speicher
Joshua Bronson
Jurko Gospodnetić
Justyna Janczyszyn
@@ -146,11 +148,13 @@ Ned Batchelder
Neven Mundar
Nicolas Delaby
Oleg Pidsadnyi
Oleg Sushchenko
Oliver Bestwalter
Omar Kohl
Omer Hadari
Patrick Hayes
Paweł Adamczak
Pedro Algarvio
Pieter Mulder
Piotr Banaszkiewicz
Punyashloka Biswal
@@ -194,6 +198,7 @@ Victor Uriarte
Vidar T. Fauske
Vitaly Lashmanov
Vlad Dragos
William Lee
Wouter van Ackooy
Xuan Luong
Xuecong Liao
 CHANGELOG.rst | 152 +

CHANGELOG.rst
@@ -8,6 +8,158 @@

.. towncrier release notes start

Pytest 3.5.0 (2018-03-21)
=========================

Deprecations and Removals
-------------------------

- ``record_xml_property`` fixture is now deprecated in favor of the more
  generic ``record_property``. (`#2770
  <https://github.com/pytest-dev/pytest/issues/2770>`_)

- Defining ``pytest_plugins`` is now deprecated in non-top-level ``conftest.py``
  files, because they "leak" to the entire directory tree. (`#3084
  <https://github.com/pytest-dev/pytest/issues/3084>`_)
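
  As an illustration (layout and plugin name hypothetical), the declaration
  now belongs in the top-level ``conftest.py`` only::

      # conftest.py (at the rootdir) -- still fine
      pytest_plugins = ["myproject.fixtures"]

      # tests/unit/conftest.py -- now triggers a deprecation warning,
      # because the plugin would be loaded for the whole run anyway,
      # not just for this directory.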

Features
--------

- New ``--show-capture`` command-line option that allows specifying how to
  display captured output when tests fail: ``no``, ``stdout``, ``stderr``,
  ``log`` or ``all`` (the default). (`#1478
  <https://github.com/pytest-dev/pytest/issues/1478>`_)
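
  A sketch of the effect (file and test body hypothetical)::

      # test_streams.py
      import logging
      import sys

      def test_streams():
          print("to stdout")
          sys.stderr.write("to stderr\n")
          logging.warning("to the log")
          assert False

  Running ``pytest --show-capture=stderr test_streams.py`` would print only
  the captured-stderr section of the failure report, while the default
  ``all`` prints the stdout, stderr and log sections.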

- New ``--rootdir`` command-line option to override the rules for discovering
  the root directory. See `customize
  <https://docs.pytest.org/en/latest/customize.html>`_ in the documentation for
  details. (`#1642 <https://github.com/pytest-dev/pytest/issues/1642>`_)

- Fixtures are now instantiated based on their scopes, with higher-scoped
  fixtures (such as ``session``) being instantiated before lower-scoped
  fixtures (such as ``function``). The relative order of fixtures of the same
  scope is kept unchanged, based on their declaration order and their
  dependencies. (`#2405 <https://github.com/pytest-dev/pytest/issues/2405>`_)
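
  A minimal sketch of the new guarantee (names hypothetical)::

      import pytest

      order = []

      @pytest.fixture
      def func():
          order.append("function")

      @pytest.fixture(scope="session")
      def sess():
          order.append("session")

      def test_order(func, sess):
          # the session-scoped fixture is created first even though it is
          # requested last
          assert order == ["session", "function"]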

- ``record_xml_property`` has been renamed to ``record_property`` and is now
  compatible with xdist, markers and any reporter. The ``record_xml_property``
  name is now deprecated. (`#2770
  <https://github.com/pytest-dev/pytest/issues/2770>`_)
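
  Usage stays the same apart from the name::

      def test_function(record_property):
          record_property("example_key", 1)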

- New ``--nf``, ``--new-first`` options: run new tests first followed by the
  rest of the tests; in both cases tests are also sorted by file modification
  time, with more recent files coming first. (`#3034
  <https://github.com/pytest-dev/pytest/issues/3034>`_)

- New ``--last-failed-no-failures`` command-line option that allows specifying
  the behavior of the cache plugin's ``--last-failed`` feature when no tests
  failed in the last run (or no cache was found): ``none`` or ``all`` (the
  default). (`#3139 <https://github.com/pytest-dev/pytest/issues/3139>`_)
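
  For example, running::

      pytest --last-failed --last-failed-no-failures none

  runs no tests at all when the previous run recorded no failures, instead of
  falling back to running the full suite.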

- New ``--doctest-continue-on-failure`` command-line option to enable doctests
  to show multiple failures for each snippet, instead of stopping at the first
  failure. (`#3149 <https://github.com/pytest-dev/pytest/issues/3149>`_)
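
  A sketch (module name hypothetical)::

      # mymath.py
      def add(a, b):
          """
          >>> add(1, 1)
          3
          >>> add(2, 2)
          5
          """
          return a + b

  With ``pytest --doctest-modules --doctest-continue-on-failure mymath.py``
  both failing examples are reported, rather than only the first one.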

- Captured log messages are added to the ``<system-out>`` tag in the generated
  junit xml file if the ``junit_logging`` ini option is set to ``system-out``.
  If the value of this ini option is ``system-err``, the logs are written to
  ``<system-err>``. The default value for ``junit_logging`` is ``no``, meaning
  captured logs are not written to the output file. (`#3156
  <https://github.com/pytest-dev/pytest/issues/3156>`_)
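
  Configuration sketch::

      # pytest.ini
      [pytest]
      junit_logging = system-out

  With this setting, and when running with ``--junitxml=report.xml``, captured
  log messages end up inside the ``<system-out>`` element of each testcase in
  the XML report.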

- Allow the logging plugin to handle ``pytest_runtest_logstart`` and
  ``pytest_runtest_logfinish`` hooks when live logs are enabled. (`#3189
  <https://github.com/pytest-dev/pytest/issues/3189>`_)

- Passing ``--log-cli-level`` on the command line now automatically activates
  live logging. (`#3190 <https://github.com/pytest-dev/pytest/issues/3190>`_)

- Add command line option ``--deselect`` to allow deselection of individual
  tests at collection time. (`#3198
  <https://github.com/pytest-dev/pytest/issues/3198>`_)
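
  For example (path hypothetical)::

      pytest --deselect tests/test_example.py::test_b

  collects as usual but removes ``test_b`` from the run before it starts.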

- Captured logs are printed before entering pdb. (`#3204
  <https://github.com/pytest-dev/pytest/issues/3204>`_)

- The deselected item count is now shown before tests are run, e.g.
  ``collected X items / Y deselected``. (`#3213
  <https://github.com/pytest-dev/pytest/issues/3213>`_)

- The builtin module ``platform`` is now available for use in expressions in
  ``pytest.mark``. (`#3236
  <https://github.com/pytest-dev/pytest/issues/3236>`_)
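
  A sketch of what this enables in string conditions::

      import pytest

      @pytest.mark.skipif("platform.system() == 'Windows'")
      def test_posix_only():
          pass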

- The *short test summary info* section is now displayed after tracebacks and
  warnings in the terminal. (`#3255
  <https://github.com/pytest-dev/pytest/issues/3255>`_)

- New ``--verbosity`` flag to set verbosity level explicitly. (`#3296
  <https://github.com/pytest-dev/pytest/issues/3296>`_)

- ``pytest.approx`` now accepts comparing a numpy array with a scalar. (`#3312
  <https://github.com/pytest-dev/pytest/issues/3312>`_)
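
  A sketch (requires ``numpy``)::

      import numpy as np
      from pytest import approx

      # comparing an array against a single scalar now works
      assert np.array([1.0 + 1e-7, 1.0 - 1e-8]) == approx(1.0)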


Bug Fixes
---------

- Suppress ``IOError`` when closing the temporary file used for capturing
  streams in Python 2.7. (`#2370
  <https://github.com/pytest-dev/pytest/issues/2370>`_)

- Fixed the ``clear()`` method on the ``caplog`` fixture, which cleared
  ``records`` but not the ``text`` property. (`#3297
  <https://github.com/pytest-dev/pytest/issues/3297>`_)
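
  A sketch of the now-consistent behavior::

      import logging

      def test_clear(caplog):
          logging.getLogger().warning("boom")
          caplog.clear()
          assert caplog.records == []
          assert caplog.text == ""  # previously still contained "boom"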

- During test collection, when stdin is not allowed to be read, the
  ``DontReadFromStdin`` object now allows itself to be iterated and resolves
  to an iterator without crashing. (`#3314
  <https://github.com/pytest-dev/pytest/issues/3314>`_)


Improved Documentation
----------------------

- Added a `reference <https://docs.pytest.org/en/latest/reference.html>`_ page
  to the docs. (`#1713 <https://github.com/pytest-dev/pytest/issues/1713>`_)


Trivial/Internal Changes
------------------------

- Change minimum requirement of ``attrs`` to ``17.4.0``. (`#3228
  <https://github.com/pytest-dev/pytest/issues/3228>`_)

- Renamed example directories so all tests pass when run from the base
  directory. (`#3245 <https://github.com/pytest-dev/pytest/issues/3245>`_)

- Internal ``mark.py`` module has been turned into a package. (`#3250
  <https://github.com/pytest-dev/pytest/issues/3250>`_)

- ``pytest`` now depends on the `more_itertools
  <https://github.com/erikrose/more-itertools>`_ package. (`#3265
  <https://github.com/pytest-dev/pytest/issues/3265>`_)

- Added a warning when the ``[pytest]`` section is used in a ``.cfg`` file
  passed with ``-c``. (`#3268 <https://github.com/pytest-dev/pytest/issues/3268>`_)

- ``nodeids`` can now be passed explicitly to ``FSCollector`` and ``Node``
  constructors. (`#3291 <https://github.com/pytest-dev/pytest/issues/3291>`_)

- Internal refactoring of ``FormattedExcinfo`` to use ``attrs`` facilities and
  remove old support code for legacy Python versions. (`#3292
  <https://github.com/pytest-dev/pytest/issues/3292>`_)

- Refactoring to unify how verbosity is handled internally. (`#3296
  <https://github.com/pytest-dev/pytest/issues/3296>`_)

- Internal refactoring to better integrate with argparse. (`#3304
  <https://github.com/pytest-dev/pytest/issues/3304>`_)

- Fix a Python example when calling a fixture in doc/en/usage.rst. (`#3308
  <https://github.com/pytest-dev/pytest/issues/3308>`_)


Pytest 3.4.2 (2018-03-04)
=========================

@@ -3,6 +3,8 @@ import inspect
 import sys
 import traceback
 from inspect import CO_VARARGS, CO_VARKEYWORDS
+
+import attr
 import re
 from weakref import ref
 from _pytest.compat import _PY2, _PY3, PY35, safe_str
@@ -458,19 +460,19 @@ class ExceptionInfo(object):
         return True


+@attr.s
 class FormattedExcinfo(object):
     """ presenting information about failing Functions and Generators. """
     # for traceback entries
     flow_marker = ">"
     fail_marker = "E"

-    def __init__(self, showlocals=False, style="long", abspath=True, tbfilter=True, funcargs=False):
-        self.showlocals = showlocals
-        self.style = style
-        self.tbfilter = tbfilter
-        self.funcargs = funcargs
-        self.abspath = abspath
-        self.astcache = {}
+    showlocals = attr.ib(default=False)
+    style = attr.ib(default="long")
+    abspath = attr.ib(default=True)
+    tbfilter = attr.ib(default=True)
+    funcargs = attr.ib(default=False)
+    astcache = attr.ib(default=attr.Factory(dict), init=False, repr=False)

     def _getindent(self, source):
         # figure out indent for given source

@@ -26,7 +26,7 @@ class Source(object):
         for part in parts:
             if not part:
                 partlines = []
-            if isinstance(part, Source):
+            elif isinstance(part, Source):
                 partlines = part.lines
             elif isinstance(part, (tuple, list)):
                 partlines = [x.rstrip("\n") for x in part]
@@ -98,14 +98,14 @@ class Source(object):
         newsource.lines = [(indent + line) for line in self.lines]
         return newsource

-    def getstatement(self, lineno, assertion=False):
+    def getstatement(self, lineno):
         """ return Source statement which contains the
             given linenumber (counted from 0).
         """
-        start, end = self.getstatementrange(lineno, assertion)
+        start, end = self.getstatementrange(lineno)
         return self[start:end]

-    def getstatementrange(self, lineno, assertion=False):
+    def getstatementrange(self, lineno):
         """ return (start, end) tuple which spans the minimal
             statement region which containing the given lineno.
         """
@@ -131,13 +131,7 @@ class Source(object):
         """ return True if source is parseable, heuristically
             deindenting it by default.
         """
-        try:
-            import parser
-        except ImportError:
-            def syntax_checker(x):
-                return compile(x, 'asd', 'exec')
-        else:
-            syntax_checker = parser.suite
+        from parser import suite as syntax_checker

         if deindent:
             source = str(self.deindent())
@@ -219,9 +213,9 @@ def getfslineno(obj):
     """ Return source location (path, lineno) for the given object.
     If the source cannot be determined return ("", -1)
     """
-    import _pytest._code
+    from .code import Code
     try:
-        code = _pytest._code.Code(obj)
+        code = Code(obj)
     except TypeError:
         try:
             fn = inspect.getsourcefile(obj) or inspect.getfile(obj)
@@ -259,8 +253,8 @@ def findsource(obj):


 def getsource(obj, **kwargs):
-    import _pytest._code
-    obj = _pytest._code.getrawcode(obj)
+    from .code import getrawcode
+    obj = getrawcode(obj)
     try:
         strsrc = inspect.getsource(obj)
     except IndentationError:
@@ -286,8 +280,6 @@ def deindent(lines, offset=None):
     def readline_generator(lines):
         for line in lines:
             yield line + '\n'
-        while True:
-            yield ''

     it = readline_generator(lines)

@@ -318,9 +310,9 @@ def get_statement_startend2(lineno, node):
     # AST's line numbers start indexing at 1
     values = []
     for x in ast.walk(node):
-        if isinstance(x, ast.stmt) or isinstance(x, ast.ExceptHandler):
+        if isinstance(x, (ast.stmt, ast.ExceptHandler)):
             values.append(x.lineno - 1)
-            for name in "finalbody", "orelse":
+            for name in ("finalbody", "orelse"):
                 val = getattr(x, name, None)
                 if val:
                     # treat the finally/orelse part as its own statement
@@ -338,11 +330,8 @@ def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
     if astnode is None:
         content = str(source)
-        try:
-            astnode = compile(content, "source", "exec", 1024)  # 1024 for AST
-        except ValueError:
-            start, end = getstatementrange_old(lineno, source, assertion)
-            return None, start, end
+        astnode = compile(content, "source", "exec", 1024)  # 1024 for AST

     start, end = get_statement_startend2(lineno, astnode)
     # we need to correct the end:
     # - ast-parsing strips comments
@@ -374,38 +363,3 @@ def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
         else:
             break
     return astnode, start, end
-
-
-def getstatementrange_old(lineno, source, assertion=False):
-    """ return (start, end) tuple which spans the minimal
-    statement region which containing the given lineno.
-    raise an IndexError if no such statementrange can be found.
-    """
-    # XXX this logic is only used on python2.4 and below
-    # 1. find the start of the statement
-    from codeop import compile_command
-    for start in range(lineno, -1, -1):
-        if assertion:
-            line = source.lines[start]
-            # the following lines are not fully tested, change with care
-            if 'super' in line and 'self' in line and '__init__' in line:
-                raise IndexError("likely a subclass")
-            if "assert" not in line and "raise" not in line:
-                continue
-        trylines = source.lines[start:lineno + 1]
-        # quick hack to prepare parsing an indented line with
-        # compile_command() (which errors on "return" outside defs)
-        trylines.insert(0, 'def xxx():')
-        trysource = '\n '.join(trylines)
-        # ^ space here
-        try:
-            compile_command(trysource)
-        except (SyntaxError, OverflowError, ValueError):
-            continue
-
-        # 2. find the end of the statement
-        for end in range(lineno + 1, len(source) + 1):
-            trysource = source[start:end]
-            if trysource.isparseable():
-                return start, end
-        raise SyntaxError("no valid source range around line %d " % (lineno,))

@@ -5,7 +5,12 @@ the name cache was not chosen to ensure pluggy automatically
 ignores the external pytest-cache
 """
 from __future__ import absolute_import, division, print_function
+
+from collections import OrderedDict
+
 import py
+import six
+
 import pytest
 import json
 import os
@@ -107,11 +112,12 @@ class LFPlugin(object):
         self.active = any(config.getoption(key) for key in active_keys)
         self.lastfailed = config.cache.get("cache/lastfailed", {})
         self._previously_failed_count = None
+        self._no_failures_behavior = self.config.getoption('last_failed_no_failures')

     def pytest_report_collectionfinish(self):
         if self.active:
             if not self._previously_failed_count:
-                mode = "run all (no recorded failures)"
+                mode = "run {} (no recorded failures)".format(self._no_failures_behavior)
             else:
                 noun = 'failure' if self._previously_failed_count == 1 else 'failures'
                 suffix = " first" if self.config.getoption(
@@ -139,24 +145,28 @@ class LFPlugin(object):
             self.lastfailed[report.nodeid] = True

     def pytest_collection_modifyitems(self, session, config, items):
-        if self.active and self.lastfailed:
-            previously_failed = []
-            previously_passed = []
-            for item in items:
-                if item.nodeid in self.lastfailed:
-                    previously_failed.append(item)
-                else:
-                    previously_passed.append(item)
-            self._previously_failed_count = len(previously_failed)
-            if not previously_failed:
-                # running a subset of all tests with recorded failures outside
-                # of the set of tests currently executing
-                return
-            if self.config.getoption("lf"):
-                items[:] = previously_failed
-                config.hook.pytest_deselected(items=previously_passed)
-            else:
-                items[:] = previously_failed + previously_passed
+        if self.active:
+            if self.lastfailed:
+                previously_failed = []
+                previously_passed = []
+                for item in items:
+                    if item.nodeid in self.lastfailed:
+                        previously_failed.append(item)
+                    else:
+                        previously_passed.append(item)
+                self._previously_failed_count = len(previously_failed)
+                if not previously_failed:
+                    # running a subset of all tests with recorded failures outside
+                    # of the set of tests currently executing
+                    return
+                if self.config.getoption("lf"):
+                    items[:] = previously_failed
+                    config.hook.pytest_deselected(items=previously_passed)
+                else:
+                    items[:] = previously_failed + previously_passed
+            elif self._no_failures_behavior == 'none':
+                config.hook.pytest_deselected(items=items)
+                items[:] = []

     def pytest_sessionfinish(self, session):
         config = self.config
@@ -168,6 +178,39 @@ class LFPlugin(object):
         config.cache.set("cache/lastfailed", self.lastfailed)


+class NFPlugin(object):
+    """ Plugin which implements the --nf (run new-first) option """
+
+    def __init__(self, config):
+        self.config = config
+        self.active = config.option.newfirst
+        self.cached_nodeids = config.cache.get("cache/nodeids", [])
+
+    def pytest_collection_modifyitems(self, session, config, items):
+        if self.active:
+            new_items = OrderedDict()
+            other_items = OrderedDict()
+            for item in items:
+                if item.nodeid not in self.cached_nodeids:
+                    new_items[item.nodeid] = item
+                else:
+                    other_items[item.nodeid] = item
+
+            items[:] = self._get_increasing_order(six.itervalues(new_items)) + \
+                       self._get_increasing_order(six.itervalues(other_items))
+        self.cached_nodeids = [x.nodeid for x in items if isinstance(x, pytest.Item)]
+
+    def _get_increasing_order(self, items):
+        return sorted(items, key=lambda item: item.fspath.mtime(), reverse=True)
+
+    def pytest_sessionfinish(self, session):
+        config = self.config
+        if config.getoption("cacheshow") or hasattr(config, "slaveinput"):
+            return
+
+        config.cache.set("cache/nodeids", self.cached_nodeids)
+
+
 def pytest_addoption(parser):
     group = parser.getgroup("general")
     group.addoption(
@@ -179,6 +222,10 @@ def pytest_addoption(parser):
         help="run all tests but run the last failures first. "
              "This may re-order tests and thus lead to "
              "repeated fixture setup/teardown")
+    group.addoption(
+        '--nf', '--new-first', action='store_true', dest="newfirst",
+        help="run tests from new files first, then the rest of the tests "
+             "sorted by file mtime")
     group.addoption(
         '--cache-show', action='store_true', dest="cacheshow",
         help="show cache contents, don't perform collection or tests")
@@ -188,6 +235,12 @@ def pytest_addoption(parser):
     parser.addini(
         "cache_dir", default='.pytest_cache',
         help="cache directory path.")
+    group.addoption(
+        '--lfnf', '--last-failed-no-failures', action='store',
+        dest='last_failed_no_failures', choices=('all', 'none'), default='all',
+        help='change the behavior when no test failed in the last run or no '
+             'information about the last failures was found in the cache'
+    )


 def pytest_cmdline_main(config):
@@ -200,6 +253,7 @@ def pytest_cmdline_main(config):
 def pytest_configure(config):
     config.cache = Cache(config)
     config.pluginmanager.register(LFPlugin(config), "lfplugin")
+    config.pluginmanager.register(NFPlugin(config), "nfplugin")


 @pytest.fixture

@@ -5,12 +5,14 @@ import shlex
 import traceback
 import types
 import warnings

+import copy
 import six
 import py
 # DON't import pytest here because it causes import cycle troubles
 import sys
 import os
+from _pytest.outcomes import Skipped

 import _pytest._code
 import _pytest.hookspec  # the extension point definitions
 import _pytest.assertion
@@ -52,7 +54,7 @@ def main(args=None, plugins=None):
             tw = py.io.TerminalWriter(sys.stderr)
             for line in traceback.format_exception(*e.excinfo):
                 tw.line(line.rstrip(), red=True)
-            tw.line("ERROR: could not load %s\n" % (e.path), red=True)
+            tw.line("ERROR: could not load %s\n" % (e.path,), red=True)
             return 4
         else:
             try:
@@ -66,7 +68,7 @@ def main(args=None, plugins=None):
         return 4


-class cmdline(object):  # compatibility namespace
+class cmdline(object):  # NOQA compatibility namespace
     main = staticmethod(main)


@@ -199,6 +201,8 @@ class PytestPluginManager(PluginManager):

         # Config._consider_importhook will set a real object if required.
         self.rewrite_hook = _pytest.assertion.DummyRewriteHook()
+        # Used to know when we are importing conftests after the pytest_configure stage
+        self._configured = False

     def addhooks(self, module_or_class):
         """
@@ -274,6 +278,7 @@ class PytestPluginManager(PluginManager):
         config.addinivalue_line("markers",
                                 "trylast: mark a hook implementation function such that the "
                                 "plugin machinery will try to call it last/as late as possible.")
+        self._configured = True

     def _warn(self, message):
         kwargs = message if isinstance(message, dict) else {
@@ -364,6 +369,9 @@ class PytestPluginManager(PluginManager):
             _ensure_removed_sysmodule(conftestpath.purebasename)
             try:
                 mod = conftestpath.pyimport()
+                if hasattr(mod, 'pytest_plugins') and self._configured:
+                    from _pytest.deprecated import PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST
+                    warnings.warn(PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST)
             except Exception:
                 raise ConftestImportFailure(conftestpath, sys.exc_info())

@@ -435,10 +443,7 @@ class PytestPluginManager(PluginManager):

             six.reraise(new_exc_type, new_exc, sys.exc_info()[2])

-        except Exception as e:
-            import pytest
-            if not hasattr(pytest, 'skip') or not isinstance(e, pytest.skip.Exception):
-                raise
+        except Skipped as e:
             self._warn("skipped plugin %r: %s" % ((modname, e.msg)))
         else:
             mod = sys.modules[importspec]
@@ -846,19 +851,6 @@ def _ensure_removed_sysmodule(modname):
         pass


-class CmdOptions(object):
-    """ holds cmdline options as attributes."""
-
-    def __init__(self, values=()):
-        self.__dict__.update(values)
-
-    def __repr__(self):
-        return "<CmdOptions %r>" % (self.__dict__,)
-
-    def copy(self):
-        return CmdOptions(self.__dict__)
-
-
 class Notset(object):
     def __repr__(self):
         return "<NOTSET>"
@@ -886,7 +878,7 @@ class Config(object):
     def __init__(self, pluginmanager):
         #: access to command line option as attributes.
         #: (deprecated), use :py:func:`getoption() <_pytest.config.Config.getoption>` instead
-        self.option = CmdOptions()
+        self.option = argparse.Namespace()
         _a = FILE_OR_DIR
         self._parser = Parser(
             usage="%%(prog)s [options] [%s] [%s] [...]" % (_a, _a),
@@ -990,8 +982,9 @@ class Config(object):
         self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)

     def _initini(self, args):
-        ns, unknown_args = self._parser.parse_known_and_unknown_args(args, namespace=self.option.copy())
-        r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args, warnfunc=self.warn)
+        ns, unknown_args = self._parser.parse_known_and_unknown_args(args, namespace=copy.copy(self.option))
+        r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args, warnfunc=self.warn,
+                            rootdir_cmd_arg=ns.rootdir or None)
         self.rootdir, self.inifile, self.inicfg = r
         self._parser.extra_info['rootdir'] = self.rootdir
         self._parser.extra_info['inifile'] = self.inifile
@@ -1016,7 +1009,7 @@ class Config(object):
             mode = 'plain'
         else:
             self._mark_plugins_for_rewrite(hook)
-        self._warn_about_missing_assertion(mode)
+        _warn_about_missing_assertion(mode)

     def _mark_plugins_for_rewrite(self, hook):
         """
@@ -1043,23 +1036,6 @@ class Config(object):
         for name in _iter_rewritable_modules(package_files):
             hook.mark_rewrite(name)

-    def _warn_about_missing_assertion(self, mode):
-        try:
-            assert False
-        except AssertionError:
-            pass
-        else:
-            if mode == 'plain':
-                sys.stderr.write("WARNING: ASSERTIONS ARE NOT EXECUTED"
-                                 " and FAILING TESTS WILL PASS. Are you"
-                                 " using python -O?")
-            else:
-                sys.stderr.write("WARNING: assertions not in test modules or"
-                                 " plugins will be ignored"
-                                 " because assert statements are not executed "
-                                 "by the underlying Python interpreter "
-                                 "(are you using python -O?)\n")
-
     def _preparse(self, args, addopts=True):
         if addopts:
             args[:] = shlex.split(os.environ.get('PYTEST_ADDOPTS', '')) + args
@@ -1071,7 +1047,8 @@ class Config(object):
         self.pluginmanager.consider_preparse(args)
         self.pluginmanager.load_setuptools_entrypoints('pytest11')
         self.pluginmanager.consider_env()
-        self.known_args_namespace = ns = self._parser.parse_known_args(args, namespace=self.option.copy())
+        self.known_args_namespace = ns = self._parser.parse_known_args(
+            args, namespace=copy.copy(self.option))
         if self.known_args_namespace.confcutdir is None and self.inifile:
             confcutdir = py.path.local(self.inifile).dirname
             self.known_args_namespace.confcutdir = confcutdir
@@ -1233,6 +1210,29 @@ class Config(object):
         return self.getoption(name, skip=True)


+def _assertion_supported():
+    try:
+        assert False
+    except AssertionError:
+        return True
+    else:
+        return False
+
+
+def _warn_about_missing_assertion(mode):
+    if not _assertion_supported():
+        if mode == 'plain':
+            sys.stderr.write("WARNING: ASSERTIONS ARE NOT EXECUTED"
+                             " and FAILING TESTS WILL PASS. Are you"
+                             " using python -O?")
+        else:
+            sys.stderr.write("WARNING: assertions not in test modules or"
+                             " plugins will be ignored"
+                             " because assert statements are not executed "
+                             "by the underlying Python interpreter "
+                             "(are you using python -O?)\n")
+
+
 def exists(path, ignore=EnvironmentError):
     try:
         return path.check()
@@ -1250,7 +1250,7 @@ def getcfg(args, warnfunc=None):
        This parameter should be removed when pytest
        adopts standard deprecation warnings (#1804).
    """
-    from _pytest.deprecated import SETUP_CFG_PYTEST
+    from _pytest.deprecated import CFG_PYTEST_SECTION
     inibasenames = ["pytest.ini", "tox.ini", "setup.cfg"]
     args = [x for x in args if not str(x).startswith("-")]
     if not args:
@@ -1264,7 +1264,7 @@ def getcfg(args, warnfunc=None):
             iniconfig = py.iniconfig.IniConfig(p)
             if 'pytest' in iniconfig.sections:
                 if inibasename == 'setup.cfg' and warnfunc:
-                    warnfunc('C1', SETUP_CFG_PYTEST)
+                    warnfunc('C1', CFG_PYTEST_SECTION.format(filename=inibasename))
                 return base, p, iniconfig['pytest']
             if inibasename == 'setup.cfg' and 'tool:pytest' in iniconfig.sections:
                 return base, p, iniconfig['tool:pytest']
@@ -1323,15 +1323,19 @@ def get_dirs_from_args(args):
     ]


-def determine_setup(inifile, args, warnfunc=None):
+def determine_setup(inifile, args, warnfunc=None, rootdir_cmd_arg=None):
     dirs = get_dirs_from_args(args)
     if inifile:
         iniconfig = py.iniconfig.IniConfig(inifile)
+        is_cfg_file = str(inifile).endswith('.cfg')
+        # TODO: [pytest] section in *.cfg files is deprecated. Need refactoring.
+        sections = ['tool:pytest', 'pytest'] if is_cfg_file else ['pytest']
+        for section in sections:
+            try:
+                inicfg = iniconfig[section]
+                if is_cfg_file and section == 'pytest' and warnfunc:
+                    from _pytest.deprecated import CFG_PYTEST_SECTION
+                    warnfunc('C1', CFG_PYTEST_SECTION.format(filename=str(inifile)))
+                break
+            except KeyError:
+                inicfg = None
@@ -1350,6 +1354,11 @@ def determine_setup(inifile, args, warnfunc=None, rootdir_cmd_arg=None):
             is_fs_root = os.path.splitdrive(str(rootdir))[1] == '/'
             if is_fs_root:
                 rootdir = ancestor
+    if rootdir_cmd_arg:
+        rootdir_abs_path = py.path.local(os.path.expandvars(rootdir_cmd_arg))
+        if not os.path.isdir(str(rootdir_abs_path)):
+            raise UsageError("Directory '{}' not found. Check your '--rootdir' option.".format(rootdir_abs_path))
+        rootdir = rootdir_abs_path
     return rootdir, inifile, inicfg or {}

@@ -87,15 +87,16 @@ def _enter_pdb(node, excinfo, rep):
     tw = node.config.pluginmanager.getplugin("terminalreporter")._tw
     tw.line()

-    captured_stdout = rep.capstdout
-    if len(captured_stdout) > 0:
-        tw.sep(">", "captured stdout")
-        tw.line(captured_stdout)
+    showcapture = node.config.option.showcapture

-    captured_stderr = rep.capstderr
-    if len(captured_stderr) > 0:
-        tw.sep(">", "captured stderr")
-        tw.line(captured_stderr)
+    for sectionname, content in (('stdout', rep.capstdout),
+                                 ('stderr', rep.capstderr),
+                                 ('log', rep.caplog)):
+        if showcapture in (sectionname, 'all') and content:
+            tw.sep(">", "captured " + sectionname)
+            if content[-1:] == "\n":
+                content = content[:-1]
+            tw.line(content)

     tw.sep(">", "traceback")
     rep.toterminal(tw)

@@ -22,7 +22,7 @@ FUNCARG_PREFIX = (
     'and scheduled to be removed in pytest 4.0. '
     'Please remove the prefix and use the @pytest.fixture decorator instead.')

-SETUP_CFG_PYTEST = '[pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.'
+CFG_PYTEST_SECTION = '[pytest] section in {filename} files is deprecated, use [tool:pytest] instead.'

 GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue"

@@ -41,6 +41,12 @@ MARK_PARAMETERSET_UNPACKING = RemovedInPytest4Warning(
     "For more details, see: https://docs.pytest.org/en/latest/parametrize.html"
 )

+RECORD_XML_PROPERTY = (
+    'Fixture renamed from "record_xml_property" to "record_property" as user '
+    'properties are now available to all reporters.\n'
+    '"record_xml_property" is now deprecated.'
+)
+
 COLLECTOR_MAKEITEM = RemovedInPytest4Warning(
     "pycollector makeitem was removed "
     "as it is an accidentially leaked internal api"
@@ -50,3 +56,9 @@ METAFUNC_ADD_CALL = (
     "Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.\n"
     "Please use Metafunc.parametrize instead."
 )
+
+PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST = RemovedInPytest4Warning(
+    "Defining pytest_plugins in a non-top-level conftest is deprecated, "
+    "because it affects the entire directory tree in a non-explicit way.\n"
+    "Please move it to the top level conftest file instead."
+)

@@ -24,6 +24,9 @@ DOCTEST_REPORT_CHOICES = (
     DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
 )

+# Lazy definition of runner class
+RUNNER_CLASS = None
+

 def pytest_addoption(parser):
     parser.addini('doctest_optionflags', 'option flags for doctests',
@@ -47,6 +50,10 @@ def pytest_addoption(parser):
             action="store_true", default=False,
             help="ignore doctest ImportErrors",
             dest="doctest_ignore_import_errors")
+    group.addoption("--doctest-continue-on-failure",
+                    action="store_true", default=False,
+                    help="for a given doctest, continue to run after the first failure",
+                    dest="doctest_continue_on_failure")


 def pytest_collect_file(path, parent):
@@ -77,14 +84,63 @@ def _is_doctest(config, path, parent):

 class ReprFailDoctest(TerminalRepr):

-    def __init__(self, reprlocation, lines):
-        self.reprlocation = reprlocation
-        self.lines = lines
+    def __init__(self, reprlocation_lines):
+        # List of (reprlocation, lines) tuples
+        self.reprlocation_lines = reprlocation_lines

     def toterminal(self, tw):
-        for line in self.lines:
-            tw.line(line)
-        self.reprlocation.toterminal(tw)
+        for reprlocation, lines in self.reprlocation_lines:
+            for line in lines:
+                tw.line(line)
+            reprlocation.toterminal(tw)
+
+
+class MultipleDoctestFailures(Exception):
+    def __init__(self, failures):
+        super(MultipleDoctestFailures, self).__init__()
+        self.failures = failures
+
+
+def _init_runner_class():
+    import doctest
+
+    class PytestDoctestRunner(doctest.DebugRunner):
+        """
+        Runner to collect failures. Note that the out variable in this case is
+        a list instead of a stdout-like object
+        """
+        def __init__(self, checker=None, verbose=None, optionflags=0,
+                     continue_on_failure=True):
+            doctest.DebugRunner.__init__(
+                self, checker=checker, verbose=verbose, optionflags=optionflags)
+            self.continue_on_failure = continue_on_failure
+
+        def report_failure(self, out, test, example, got):
+            failure = doctest.DocTestFailure(test, example, got)
+            if self.continue_on_failure:
+                out.append(failure)
+            else:
+                raise failure
+
+        def report_unexpected_exception(self, out, test, example, exc_info):
+            failure = doctest.UnexpectedException(test, example, exc_info)
+            if self.continue_on_failure:
+                out.append(failure)
+            else:
+                raise failure
+
+    return PytestDoctestRunner
+
+
+def _get_runner(checker=None, verbose=None, optionflags=0,
+                continue_on_failure=True):
+    # We need this in order to do a lazy import on doctest
+    global RUNNER_CLASS
+    if RUNNER_CLASS is None:
+        RUNNER_CLASS = _init_runner_class()
+    return RUNNER_CLASS(
+        checker=checker, verbose=verbose, optionflags=optionflags,
+        continue_on_failure=continue_on_failure)


 class DoctestItem(pytest.Item):
@@ -106,7 +162,10 @@ class DoctestItem(pytest.Item):
     def runtest(self):
         _check_all_skipped(self.dtest)
         self._disable_output_capturing_for_darwin()
-        self.runner.run(self.dtest)
+        failures = []
+        self.runner.run(self.dtest, out=failures)
+        if failures:
+            raise MultipleDoctestFailures(failures)

     def _disable_output_capturing_for_darwin(self):
         """
@@ -122,42 +181,51 @@ class DoctestItem(pytest.Item):

     def repr_failure(self, excinfo):
         import doctest
+        failures = None
         if excinfo.errisinstance((doctest.DocTestFailure,
                                   doctest.UnexpectedException)):
-            doctestfailure = excinfo.value
-            example = doctestfailure.example
-            test = doctestfailure.test
-            filename = test.filename
-            if test.lineno is None:
-                lineno = None
-            else:
-                lineno = test.lineno + example.lineno + 1
-            message = excinfo.type.__name__
-            reprlocation = ReprFileLocation(filename, lineno, message)
-            checker = _get_checker()
-            report_choice = _get_report_choice(self.config.getoption("doctestreport"))
-            if lineno is not None:
-                lines = doctestfailure.test.docstring.splitlines(False)
-                # add line numbers to the left of the error message
-                lines = ["%03d %s" % (i + test.lineno + 1, x)
-                         for (i, x) in enumerate(lines)]
-                # trim docstring error lines to 10
-                lines = lines[max(example.lineno - 9, 0):example.lineno + 1]
-            else:
-                lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
-                indent = '>>>'
-                for line in example.source.splitlines():
-                    lines.append('??? %s %s' % (indent, line))
-                    indent = '...'
-            if excinfo.errisinstance(doctest.DocTestFailure):
-                lines += checker.output_difference(example,
-                                                   doctestfailure.got, report_choice).split("\n")
-            else:
-                inner_excinfo = ExceptionInfo(excinfo.value.exc_info)
-                lines += ["UNEXPECTED EXCEPTION: %s" %
-                          repr(inner_excinfo.value)]
-                lines += traceback.format_exception(*excinfo.value.exc_info)
-            return ReprFailDoctest(reprlocation, lines)
+            failures = [excinfo.value]
+        elif excinfo.errisinstance(MultipleDoctestFailures):
+            failures = excinfo.value.failures
+
+        if failures is not None:
+            reprlocation_lines = []
+            for failure in failures:
+                example = failure.example
+                test = failure.test
+                filename = test.filename
+                if test.lineno is None:
+                    lineno = None
+                else:
+                    lineno = test.lineno + example.lineno + 1
+                message = type(failure).__name__
+                reprlocation = ReprFileLocation(filename, lineno, message)
+                checker = _get_checker()
+                report_choice = _get_report_choice(self.config.getoption("doctestreport"))
+                if lineno is not None:
+                    lines = failure.test.docstring.splitlines(False)
+                    # add line numbers to the left of the error message
+                    lines = ["%03d %s" % (i + test.lineno + 1, x)
+                             for (i, x) in enumerate(lines)]
+                    # trim docstring error lines to 10
+                    lines = lines[max(example.lineno - 9, 0):example.lineno + 1]
+                else:
+                    lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
+                    indent = '>>>'
+                    for line in example.source.splitlines():
+                        lines.append('??? %s %s' % (indent, line))
+                        indent = '...'
+                if isinstance(failure, doctest.DocTestFailure):
+                    lines += checker.output_difference(example,
+                                                       failure.got,
+                                                       report_choice).split("\n")
+                else:
+                    inner_excinfo = ExceptionInfo(failure.exc_info)
+                    lines += ["UNEXPECTED EXCEPTION: %s" %
+                              repr(inner_excinfo.value)]
+                    lines += traceback.format_exception(*failure.exc_info)
+                reprlocation_lines.append((reprlocation, lines))
+            return ReprFailDoctest(reprlocation_lines)
         else:
             return super(DoctestItem, self).repr_failure(excinfo)

@@ -187,6 +255,16 @@ def get_optionflags(parent):
     return flag_acc


+def _get_continue_on_failure(config):
+    continue_on_failure = config.getvalue('doctest_continue_on_failure')
+    if continue_on_failure:
+        # We need to turn off this if we use pdb since we should stop at
+        # the first failure
+        if config.getvalue("usepdb"):
+            continue_on_failure = False
+    return continue_on_failure
+
+
 class DoctestTextfile(pytest.Module):
     obj = None

@@ -202,8 +280,11 @@ class DoctestTextfile(pytest.Module):
         globs = {'__name__': '__main__'}

         optionflags = get_optionflags(self)
-        runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
-                                     checker=_get_checker())
+
+        runner = _get_runner(
+            verbose=0, optionflags=optionflags,
+            checker=_get_checker(),
+            continue_on_failure=_get_continue_on_failure(self.config))
         _fix_spoof_python2(runner, encoding)

         parser = doctest.DocTestParser()
@@ -238,8 +319,10 @@ class DoctestModule(pytest.Module):
         # uses internal doctest module parsing mechanism
         finder = doctest.DocTestFinder()
         optionflags = get_optionflags(self)
-        runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
-                                     checker=_get_checker())
+        runner = _get_runner(
+            verbose=0, optionflags=optionflags,
+            checker=_get_checker(),
+            continue_on_failure=_get_continue_on_failure(self.config))

         for test in finder.find(module, module.__name__):
             if test.examples:  # skip empty doctests

@@ -844,9 +844,9 @@ def _ensure_immutable_ids(ids):
 @attr.s(frozen=True)
 class FixtureFunctionMarker(object):
     scope = attr.ib()
-    params = attr.ib(convert=attr.converters.optional(tuple))
+    params = attr.ib(converter=attr.converters.optional(tuple))
     autouse = attr.ib(default=False)
-    ids = attr.ib(default=None, convert=_ensure_immutable_ids)
+    ids = attr.ib(default=None, converter=_ensure_immutable_ids)
     name = attr.ib(default=None)

     def __call__(self, function):
@@ -1021,9 +1021,6 @@ class FixtureManager(object):
                 if nextchar and nextchar not in ":/":
                     continue
             autousenames.extend(basenames)
-        # make sure autousenames are sorted by scope, scopenum 0 is session
-        autousenames.sort(
-            key=lambda x: self._arg2fixturedefs[x][-1].scopenum)
         return autousenames

     def getfixtureclosure(self, fixturenames, parentnode):
@@ -1054,6 +1051,16 @@ class FixtureManager(object):
             if fixturedefs:
                 arg2fixturedefs[argname] = fixturedefs
                 merge(fixturedefs[-1].argnames)
+
+        def sort_by_scope(arg_name):
+            try:
+                fixturedefs = arg2fixturedefs[arg_name]
+            except KeyError:
+                return scopes.index('function')
+            else:
+                return fixturedefs[-1].scopenum
+
+        fixturenames_closure.sort(key=sort_by_scope)
         return fixturenames_closure, arg2fixturedefs

     def pytest_generate_tests(self, metafunc):

@@ -490,7 +490,14 @@ def pytest_report_teststatus(report):


 def pytest_terminal_summary(terminalreporter, exitstatus):
-    """ add additional section in terminal summary reporting. """
+    """Add a section to terminal summary reporting.
+
+    :param _pytest.terminal.TerminalReporter terminalreporter: the internal terminal reporter object
+    :param int exitstatus: the exit status that will be reported back to the OS
+
+    .. versionadded:: 3.5
+        The ``config`` parameter.
+    """


 @hookspec(historic=True)
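
For context, a conftest.py or plugin can implement the ``pytest_terminal_summary``
hook documented above; a minimal sketch (section title hypothetical):

    # conftest.py
    def pytest_terminal_summary(terminalreporter, exitstatus):
        terminalreporter.section("my summary")
        terminalreporter.line("exit status: %d" % exitstatus)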

@@ -130,10 +130,47 @@ class _NodeReporter(object):
         self.append(node)

     def write_captured_output(self, report):
-        for capname in ('out', 'err'):
-            content = getattr(report, 'capstd' + capname)
+        content_out = report.capstdout
+        content_log = report.caplog
+        content_err = report.capstderr
+
+        if content_log or content_out:
+            if content_log and self.xml.logging == 'system-out':
+                if content_out:
+                    # syncing stdout and the log-output is not done yet. It's
+                    # probably not worth the effort. Therefore, first the captured
+                    # stdout is shown and then the captured logs.
+                    content = '\n'.join([
+                        ' Captured Stdout '.center(80, '-'),
+                        content_out,
+                        '',
+                        ' Captured Log '.center(80, '-'),
+                        content_log])
+                else:
+                    content = content_log
+            else:
+                content = content_out
+
             if content:
-                tag = getattr(Junit, 'system-' + capname)
+                tag = getattr(Junit, 'system-out')
+                self.append(tag(bin_xml_escape(content)))
+
+        if content_log or content_err:
+            if content_log and self.xml.logging == 'system-err':
+                if content_err:
+                    content = '\n'.join([
+                        ' Captured Stderr '.center(80, '-'),
+                        content_err,
+                        '',
+                        ' Captured Log '.center(80, '-'),
+                        content_log])
+                else:
+                    content = content_log
+            else:
+                content = content_err
+
+            if content:
+                tag = getattr(Junit, 'system-err')
                 self.append(tag(bin_xml_escape(content)))

     def append_pass(self, report):
@@ -196,36 +233,47 @@ class _NodeReporter(object):


 @pytest.fixture
-def record_xml_property(request):
-    """Add extra xml properties to the tag for the calling test.
+def record_property(request):
+    """Add an extra property to the calling test.
+
+    User properties become part of the test report and are available to the
+    configured reporters, like JUnit XML.
     The fixture is callable with ``(name, value)``, with value being automatically
     xml-encoded.

     Example::

-        def test_function(record_xml_property):
-            record_xml_property("example_key", 1)
+        def test_function(record_property):
+            record_property("example_key", 1)
     """
     request.node.warn(
         code='C3',
-        message='record_xml_property is an experimental feature',
+        message='record_property is an experimental feature',
     )
-    xml = getattr(request.config, "_xml", None)
-    if xml is not None:
-        node_reporter = xml.node_reporter(request.node.nodeid)
-        return node_reporter.add_property
-    else:
-        def add_property_noop(name, value):
-            pass
-
-        return add_property_noop
+
+    def append_property(name, value):
+        request.node.user_properties.append((name, value))
+    return append_property
+
+
+@pytest.fixture
+def record_xml_property(record_property):
+    """(Deprecated) use record_property."""
+    import warnings
+    from _pytest import deprecated
+    warnings.warn(
+        deprecated.RECORD_XML_PROPERTY,
+        DeprecationWarning,
+        stacklevel=2
+    )
+
+    return record_property


 @pytest.fixture
 def record_xml_attribute(request):
     """Add extra xml attributes to the tag for the calling test.
-    The fixture is callable with ``(name, value)``, with value being automatically
-    xml-encoded
+    The fixture is callable with ``(name, value)``, with value being
+    automatically xml-encoded
     """
     request.node.warn(
         code='C3',
@@ -259,13 +307,18 @@ def pytest_addoption(parser):
         default=None,
         help="prepend prefix to classnames in junit-xml output")
     parser.addini("junit_suite_name", "Test suite name for JUnit report", default="pytest")
+    parser.addini("junit_logging", "Write captured log messages to JUnit report: "
+                  "one of no|system-out|system-err",
+                  default="no")  # choices=['no', 'stdout', 'stderr'])


 def pytest_configure(config):
     xmlpath = config.option.xmlpath
     # prevent opening xmllog on slave nodes (xdist)
     if xmlpath and not hasattr(config, 'slaveinput'):
-        config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini("junit_suite_name"))
+        config._xml = LogXML(xmlpath, config.option.junitprefix,
+                             config.getini("junit_suite_name"),
+                             config.getini("junit_logging"))
         config.pluginmanager.register(config._xml)


@@ -292,11 +345,12 @@ def mangle_test_address(address):


 class LogXML(object):
-    def __init__(self, logfile, prefix, suite_name="pytest"):
+    def __init__(self, logfile, prefix, suite_name="pytest", logging="no"):
         logfile = os.path.expanduser(os.path.expandvars(logfile))
         self.logfile = os.path.normpath(os.path.abspath(logfile))
         self.prefix = prefix
         self.suite_name = suite_name
+        self.logging = logging
         self.stats = dict.fromkeys([
             'error',
             'passed',
@@ -404,6 +458,10 @@ class LogXML(object):
         if report.when == "teardown":
             reporter = self._opentestcase(report)
             reporter.write_captured_output(report)
+
+            for propname, propvalue in report.user_properties:
+                reporter.add_property(propname, propvalue)
+
             self.finalize(report)
         report_wid = getattr(report, "worker_id", None)
         report_ii = getattr(report, "item_index", None)

@@ -347,7 +347,7 @@ class LoggingPlugin(object):
         self._config = config

         # enable verbose output automatically if live logging is enabled
-        if self._config.getini('log_cli') and not config.getoption('verbose'):
+        if self._log_cli_enabled() and not config.getoption('verbose'):
             # sanity check: terminal reporter should not have been loaded at this point
             assert self._config.pluginmanager.get_plugin('terminalreporter') is None
             config.option.verbose = 1
@@ -373,6 +373,13 @@ class LoggingPlugin(object):
         # initialized during pytest_runtestloop
         self.log_cli_handler = None

+    def _log_cli_enabled(self):
+        """Return True if log_cli should be considered enabled, either explicitly
+        or because --log-cli-level was given in the command-line.
+        """
+        return self._config.getoption('--log-cli-level') is not None or \
+            self._config.getini('log_cli')
+
     @contextmanager
     def _runtest_for(self, item, when):
         """Implements the internals of pytest_runtest_xxx() hook."""
@@ -380,6 +387,11 @@ class LoggingPlugin(object):
                 formatter=self.formatter, level=self.log_level) as log_handler:
             if self.log_cli_handler:
                 self.log_cli_handler.set_when(when)
+
+            if item is None:
+                yield  # run the test
+                return
+
             if not hasattr(item, 'catch_log_handlers'):
                 item.catch_log_handlers = {}
             item.catch_log_handlers[when] = log_handler
@@ -411,9 +423,17 @@ class LoggingPlugin(object):
         with self._runtest_for(item, 'teardown'):
             yield

+    @pytest.hookimpl(hookwrapper=True)
+    def pytest_runtest_logstart(self):
+        if self.log_cli_handler:
+            self.log_cli_handler.reset()
+        with self._runtest_for(None, 'start'):
+            yield
+
+    @pytest.hookimpl(hookwrapper=True)
+    def pytest_runtest_logfinish(self):
+        with self._runtest_for(None, 'finish'):
+            yield
+
     @pytest.hookimpl(hookwrapper=True)
     def pytest_runtestloop(self, session):
@@ -434,7 +454,7 @@ class LoggingPlugin(object):
         This must be done right before starting the loop so we can access the terminal reporter plugin.
         """
         terminal_reporter = self._config.pluginmanager.get_plugin('terminalreporter')
-        if self._config.getini('log_cli') and terminal_reporter is not None:
+        if self._log_cli_enabled() and terminal_reporter is not None:
             capture_manager = self._config.pluginmanager.get_plugin('capturemanager')
             log_cli_handler = _LiveLoggingStreamHandler(terminal_reporter, capture_manager)
             log_cli_format = get_option_ini(self._config, 'log_cli_format', 'log_format')
@@ -469,6 +489,7 @@ class _LiveLoggingStreamHandler(logging.StreamHandler):
         self.capture_manager = capture_manager
         self.reset()
         self.set_when(None)
+        self._test_outcome_written = False

     def reset(self):
         """Reset the handler; should be called before the start of each test"""
@@ -478,14 +499,20 @@ class _LiveLoggingStreamHandler(logging.StreamHandler):
         """Prepares for the given test phase (setup/call/teardown)"""
         self._when = when
         self._section_name_shown = False
+        if when == 'start':
+            self._test_outcome_written = False

     def emit(self, record):
         if self.capture_manager is not None:
             self.capture_manager.suspend_global_capture()
         try:
-            if not self._first_record_emitted or self._when == 'teardown':
+            if not self._first_record_emitted:
                 self.stream.write('\n')
                 self._first_record_emitted = True
+            elif self._when in ('teardown', 'finish'):
+                if not self._test_outcome_written:
+                    self._test_outcome_written = True
+                    self.stream.write('\n')
             if not self._section_name_shown and self._when:
                 self.stream.section('live log ' + self._when, sep='-', bold=True)
                 self._section_name_shown = True

@@ -53,6 +53,11 @@ def pytest_addoption(parser):
     group._addoption("--continue-on-collection-errors", action="store_true",
                      default=False, dest="continue_on_collection_errors",
                      help="Force test execution even if collection errors occur.")
+    group._addoption("--rootdir", action="store",
+                     dest="rootdir",
+                     help="Define root directory for tests. Can be relative path: 'root_dir', './root_dir', "
+                          "'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: "
+                          "'$HOME/root_dir'.")

     group = parser.getgroup("collect", "collection")
     group.addoption('--collectonly', '--collect-only', action="store_true",
@@ -61,6 +66,8 @@ def pytest_addoption(parser):
                     help="try to interpret all arguments as python packages.")
     group.addoption("--ignore", action="append", metavar="path",
                     help="ignore path during collection (multi-allowed).")
+    group.addoption("--deselect", action="append", metavar="nodeid_prefix",
+                    help="deselect item during collection (multi-allowed).")
     # when changing this to --conf-cut-dir, config.py Conftest.setinitial
     # needs upgrading as well
     group.addoption('--confcutdir', dest="confcutdir", default=None,
@@ -203,6 +210,24 @@ def pytest_ignore_collect(path, config):
     return False


+def pytest_collection_modifyitems(items, config):
+    deselect_prefixes = tuple(config.getoption("deselect") or [])
+    if not deselect_prefixes:
+        return
+
+    remaining = []
+    deselected = []
+    for colitem in items:
+        if colitem.nodeid.startswith(deselect_prefixes):
+            deselected.append(colitem)
+        else:
+            remaining.append(colitem)
+
+    if deselected:
+        config.hook.pytest_deselected(items=deselected)
+        items[:] = remaining
+
+
 @contextlib.contextmanager
 def _patched_find_module():
     """Patch bug in pkgutil.ImpImporter.find_module
@@ -275,7 +300,7 @@ class Session(nodes.FSCollector):
     def __init__(self, config):
         nodes.FSCollector.__init__(
             self, config.rootdir, parent=None,
-            config=config, session=self)
+            config=config, session=self, nodeid="")
         self.testsfailed = 0
         self.testscollected = 0
         self.shouldstop = False
@@ -283,10 +308,8 @@ class Session(nodes.FSCollector):
         self.trace = config.trace.root.get("collection")
         self._norecursepatterns = config.getini("norecursedirs")
         self.startdir = py.path.local()
-        self.config.pluginmanager.register(self, name="session")
-
-    def _makeid(self):
-        return ""
+
+        self.config.pluginmanager.register(self, name="session")

     @hookimpl(tryfirst=True)
     def pytest_collectstart(self):
@@ -0,0 +1,156 @@
""" generic mechanism for marking and selecting python functions. """
from __future__ import absolute_import, division, print_function
from _pytest.config import UsageError
from .structures import (
    ParameterSet, EMPTY_PARAMETERSET_OPTION, MARK_GEN,
    Mark, MarkInfo, MarkDecorator, MarkGenerator,
    transfer_markers, get_empty_parameterset_mark
)
from .legacy import matchkeyword, matchmark

__all__ = [
    'Mark', 'MarkInfo', 'MarkDecorator', 'MarkGenerator',
    'transfer_markers', 'get_empty_parameterset_mark'
]


class MarkerError(Exception):

    """Error in use of a pytest marker/attribute."""


def param(*values, **kw):
    """Specify a parameter in a `pytest.mark.parametrize`_ call.

    .. code-block:: python

        @pytest.mark.parametrize("test_input,expected", [
            ("3+5", 8),
            pytest.param("6*9", 42, marks=pytest.mark.xfail),
        ])
        def test_eval(test_input, expected):
            assert eval(test_input) == expected

    :param values: variable args of the values of the parameter set, in order.
    :keyword marks: a single mark or a list of marks to be applied to this parameter set.
    :keyword str id: the id to attribute to this parameter set.
    """
    return ParameterSet.param(*values, **kw)


def pytest_addoption(parser):
    group = parser.getgroup("general")
    group._addoption(
        '-k',
        action="store", dest="keyword", default='', metavar="EXPRESSION",
        help="only run tests which match the given substring expression. "
             "An expression is a python evaluatable expression "
             "where all names are substring-matched against test names "
             "and their parent classes. Example: -k 'test_method or test_"
             "other' matches all test functions and classes whose name "
             "contains 'test_method' or 'test_other', while -k 'not test_method' "
             "matches those that don't contain 'test_method' in their names. "
             "Additionally keywords are matched to classes and functions "
             "containing extra names in their 'extra_keyword_matches' set, "
             "as well as functions which have names assigned directly to them."
    )

    group._addoption(
        "-m",
        action="store", dest="markexpr", default="", metavar="MARKEXPR",
        help="only run tests matching given mark expression. "
             "example: -m 'mark1 and not mark2'."
    )

    group.addoption(
        "--markers", action="store_true",
        help="show markers (builtin, plugin and per-project ones)."
    )

    parser.addini("markers", "markers for test functions", 'linelist')
    parser.addini(
        EMPTY_PARAMETERSET_OPTION,
        "default marker for empty parametersets")


def pytest_cmdline_main(config):
    import _pytest.config
    if config.option.markers:
        config._do_configure()
        tw = _pytest.config.create_terminal_writer(config)
        for line in config.getini("markers"):
            parts = line.split(":", 1)
            name = parts[0]
            rest = parts[1] if len(parts) == 2 else ''
            tw.write("@pytest.mark.%s:" % name, bold=True)
            tw.line(rest)
            tw.line()
        config._ensure_unconfigure()
        return 0


pytest_cmdline_main.tryfirst = True


def deselect_by_keyword(items, config):
    keywordexpr = config.option.keyword.lstrip()
    if keywordexpr.startswith("-"):
        keywordexpr = "not " + keywordexpr[1:]
    selectuntil = False
    if keywordexpr[-1:] == ":":
        selectuntil = True
        keywordexpr = keywordexpr[:-1]

    remaining = []
    deselected = []
    for colitem in items:
        if keywordexpr and not matchkeyword(colitem, keywordexpr):
            deselected.append(colitem)
        else:
            if selectuntil:
                keywordexpr = None
            remaining.append(colitem)

    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = remaining


def deselect_by_mark(items, config):
    matchexpr = config.option.markexpr
    if not matchexpr:
        return

    remaining = []
    deselected = []
    for item in items:
        if matchmark(item, matchexpr):
            remaining.append(item)
        else:
            deselected.append(item)

    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = remaining


def pytest_collection_modifyitems(items, config):
    deselect_by_keyword(items, config)
    deselect_by_mark(items, config)


def pytest_configure(config):
    config._old_mark_config = MARK_GEN._config
    if config.option.strict:
        MARK_GEN._config = config

    empty_parameterset = config.getini(EMPTY_PARAMETERSET_OPTION)

    if empty_parameterset not in ('skip', 'xfail', None, ''):
        raise UsageError(
            "{!s} must be one of skip and xfail,"
            " but it is {!r}".format(EMPTY_PARAMETERSET_OPTION, empty_parameterset))


def pytest_unconfigure(config):
    MARK_GEN._config = getattr(config, '_old_mark_config', None)
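For orientation, a minimal sketch of how the two new deselection helpers behave from a user's point of view (the test names and the ``slow`` marker below are hypothetical)::

    # content of test_example.py (hypothetical)
    import pytest

    @pytest.mark.slow
    def test_big():
        pass

    def test_small():
        pass

    # "pytest -m 'not slow'" deselects test_big via deselect_by_mark;
    # "pytest -k small" deselects it via deselect_by_keyword instead.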
@@ -0,0 +1,126 @@
import os
import six
import sys
import platform
import traceback

from . import MarkDecorator, MarkInfo
from ..outcomes import fail, TEST_OUTCOME


def cached_eval(config, expr, d):
    if not hasattr(config, '_evalcache'):
        config._evalcache = {}
    try:
        return config._evalcache[expr]
    except KeyError:
        import _pytest._code
        exprcode = _pytest._code.compile(expr, mode="eval")
        config._evalcache[expr] = x = eval(exprcode, d)
        return x


class MarkEvaluator(object):
    def __init__(self, item, name):
        self.item = item
        self._marks = None
        self._mark = None
        self._mark_name = name

    def __bool__(self):
        self._marks = self._get_marks()
        return bool(self._marks)
    __nonzero__ = __bool__

    def wasvalid(self):
        return not hasattr(self, 'exc')

    def _get_marks(self):

        keyword = self.item.keywords.get(self._mark_name)
        if isinstance(keyword, MarkDecorator):
            return [keyword.mark]
        elif isinstance(keyword, MarkInfo):
            return [x.combined for x in keyword]
        else:
            return []

    def invalidraise(self, exc):
        raises = self.get('raises')
        if not raises:
            return
        return not isinstance(exc, raises)

    def istrue(self):
        try:
            return self._istrue()
        except TEST_OUTCOME:
            self.exc = sys.exc_info()
            if isinstance(self.exc[1], SyntaxError):
                msg = [" " * (self.exc[1].offset + 4) + "^", ]
                msg.append("SyntaxError: invalid syntax")
            else:
                msg = traceback.format_exception_only(*self.exc[:2])
            fail("Error evaluating %r expression\n"
                 " %s\n"
                 "%s"
                 % (self._mark_name, self.expr, "\n".join(msg)),
                 pytrace=False)

    def _getglobals(self):
        d = {'os': os, 'sys': sys, 'platform': platform, 'config': self.item.config}
        if hasattr(self.item, 'obj'):
            d.update(self.item.obj.__globals__)
        return d

    def _istrue(self):
        if hasattr(self, 'result'):
            return self.result
        self._marks = self._get_marks()

        if self._marks:
            self.result = False
            for mark in self._marks:
                self._mark = mark
                if 'condition' in mark.kwargs:
                    args = (mark.kwargs['condition'],)
                else:
                    args = mark.args

                for expr in args:
                    self.expr = expr
                    if isinstance(expr, six.string_types):
                        d = self._getglobals()
                        result = cached_eval(self.item.config, expr, d)
                    else:
                        if "reason" not in mark.kwargs:
                            # XXX better be checked at collection time
                            msg = "you need to specify reason=STRING " \
                                  "when using booleans as conditions."
                            fail(msg)
                        result = bool(expr)
                    if result:
                        self.result = True
                        self.reason = mark.kwargs.get('reason', None)
                        self.expr = expr
                        return self.result

                if not args:
                    self.result = True
                    self.reason = mark.kwargs.get('reason', None)
                    return self.result
        return False

    def get(self, attr, default=None):
        if self._mark is None:
            return default
        return self._mark.kwargs.get(attr, default)

    def getexplanation(self):
        expl = getattr(self, 'reason', None) or self.get('reason', None)
        if not expl:
            if not hasattr(self, 'expr'):
                return ""
            else:
                return "condition: " + str(self.expr)
        return expl
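As a rough illustration of what ``MarkEvaluator`` evaluates: string conditions are compiled and evaluated against the globals built by ``_getglobals`` (``os``, ``sys``, ``platform``, ``config`` and the test module's namespace), as in this hedged ``skipif`` sketch::

    import pytest

    @pytest.mark.skipif("sys.version_info < (3, 0)", reason="requires Python 3")
    def test_py3_only():
        pass

    # the condition string goes through cached_eval() (cached per config);
    # a plain boolean condition would instead require an explicit reason=...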
@@ -0,0 +1,97 @@
"""
this is a place where we put datastructures used by legacy apis
we hope ot remove
"""
import attr
import keyword

from . import MarkInfo, MarkDecorator

from _pytest.config import UsageError


@attr.s
class MarkMapping(object):
    """Provides a local mapping for markers where item access
    resolves to True if the marker is present. """

    own_mark_names = attr.ib()

    @classmethod
    def from_keywords(cls, keywords):
        mark_names = set()
        for key, value in keywords.items():
            if isinstance(value, MarkInfo) or isinstance(value, MarkDecorator):
                mark_names.add(key)
        return cls(mark_names)

    def __getitem__(self, name):
        return name in self.own_mark_names


class KeywordMapping(object):
    """Provides a local mapping for keywords.
    Given a list of names, map any substring of one of these names to True.
    """

    def __init__(self, names):
        self._names = names

    @classmethod
    def from_item(cls, item):
        mapped_names = set()

        # Add the names of the current item and any parent items
        import pytest
        for item in item.listchain():
            if not isinstance(item, pytest.Instance):
                mapped_names.add(item.name)

        # Add the names added as extra keywords to current or parent items
        for name in item.listextrakeywords():
            mapped_names.add(name)

        # Add the names attached to the current function through direct assignment
        if hasattr(item, 'function'):
            for name in item.function.__dict__:
                mapped_names.add(name)

        return cls(mapped_names)

    def __getitem__(self, subname):
        for name in self._names:
            if subname in name:
                return True
        return False


python_keywords_allowed_list = ["or", "and", "not"]


def matchmark(colitem, markexpr):
    """Tries to match on any marker names, attached to the given colitem."""
    return eval(markexpr, {}, MarkMapping.from_keywords(colitem.keywords))


def matchkeyword(colitem, keywordexpr):
    """Tries to match given keyword expression to given collector item.

    Will match on the name of colitem, including the names of its parents.
    Only matches names of items which are either a :class:`Class` or a
    :class:`Function`.
    Additionally, matches on names in the 'extra_keyword_matches' set of
    any item, as well as names directly assigned to test functions.
    """
    mapping = KeywordMapping.from_item(colitem)
    if " " not in keywordexpr:
        # special case to allow for simple "-k pass" and "-k 1.3"
        return mapping[keywordexpr]
    elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
        return not mapping[keywordexpr[4:]]
    for kwd in keywordexpr.split():
        if keyword.iskeyword(kwd) and kwd not in python_keywords_allowed_list:
            raise UsageError("Python keyword '{}' not accepted in expressions passed to '-k'".format(kwd))
    try:
        return eval(keywordexpr, {}, mapping)
    except SyntaxError:
        raise UsageError("Wrong expression passed to '-k': {}".format(keywordexpr))
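A sketch of the matching mechanics, assuming hypothetical collected names: the ``-k`` expression is evaluated with a ``KeywordMapping`` as the local namespace, so every bare name in the expression resolves to True when it is a substring of one of the collected names::

    mapping = KeywordMapping({"test_send_http", "TestHTTP"})
    eval("http", {}, mapping)                 # True: substring of both names
    eval("http and not send", {}, mapping)    # False: "send" matches too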
@@ -1,17 +1,13 @@
""" generic mechanism for marking and selecting python functions. """
from __future__ import absolute_import, division, print_function

import inspect
import keyword
from collections import namedtuple, MutableMapping as MappingMixin
import warnings
import attr
from collections import namedtuple
from operator import attrgetter
import inspect

import attr
from ..deprecated import MARK_PARAMETERSET_UNPACKING
from ..compat import NOTSET, getfslineno
from six.moves import map

from _pytest.config import UsageError
from .deprecated import MARK_PARAMETERSET_UNPACKING
from .compat import NOTSET, getfslineno

EMPTY_PARAMETERSET_OPTION = "empty_parameter_set_mark"

@@ -26,6 +22,25 @@ def alias(name, warning=None):
    return property(getter if warning is None else warned, doc='alias for ' + name)


def istestfunc(func):
    return hasattr(func, "__call__") and \
        getattr(func, "__name__", "<lambda>") != "<lambda>"


def get_empty_parameterset_mark(config, argnames, func):
    requested_mark = config.getini(EMPTY_PARAMETERSET_OPTION)
    if requested_mark in ('', None, 'skip'):
        mark = MARK_GEN.skip
    elif requested_mark == 'xfail':
        mark = MARK_GEN.xfail(run=False)
    else:
        raise LookupError(requested_mark)
    fs, lineno = getfslineno(func)
    reason = "got empty parameter set %r, function %s at %s:%d" % (
        argnames, func.__name__, fs, lineno)
    return mark(reason=reason)


class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
    @classmethod
    def param(cls, *values, **kw):
@@ -38,8 +53,8 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
        def param_extract_id(id=None):
            return id

        id = param_extract_id(**kw)
        return cls(values, marks, id)
        id_ = param_extract_id(**kw)
        return cls(values, marks, id_)

    @classmethod
    def extract_from(cls, parameterset, legacy_force_tuple=False):
@@ -75,7 +90,7 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
        return cls(argval, marks=newmarks, id=None)

    @classmethod
    def _for_parametrize(cls, argnames, argvalues, function, config):
    def _for_parametrize(cls, argnames, argvalues, func, config):
        if not isinstance(argnames, (tuple, list)):
            argnames = [x.strip() for x in argnames.split(",") if x.strip()]
        force_tuple = len(argnames) == 1
@@ -87,7 +102,7 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
        del argvalues

        if not parameters:
            mark = get_empty_parameterset_mark(config, argnames, function)
            mark = get_empty_parameterset_mark(config, argnames, func)
            parameters.append(ParameterSet(
                values=(NOTSET,) * len(argnames),
                marks=[mark],
@@ -96,273 +111,6 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
        return argnames, parameters


def get_empty_parameterset_mark(config, argnames, function):
    requested_mark = config.getini(EMPTY_PARAMETERSET_OPTION)
    if requested_mark in ('', None, 'skip'):
        mark = MARK_GEN.skip
    elif requested_mark == 'xfail':
        mark = MARK_GEN.xfail(run=False)
    else:
        raise LookupError(requested_mark)
    fs, lineno = getfslineno(function)
    reason = "got empty parameter set %r, function %s at %s:%d" % (
        argnames, function.__name__, fs, lineno)
    return mark(reason=reason)


class MarkerError(Exception):

    """Error in use of a pytest marker/attribute."""


def param(*values, **kw):
    """Specify a parameter in a `pytest.mark.parametrize`_ call.

    .. code-block:: python

        @pytest.mark.parametrize("test_input,expected", [
            ("3+5", 8),
            pytest.param("6*9", 42, marks=pytest.mark.xfail),
        ])
        def test_eval(test_input, expected):
            assert eval(test_input) == expected

    :param values: variable args of the values of the parameter set, in order.
    :keyword marks: a single mark or a list of marks to be applied to this parameter set.
    :keyword str id: the id to attribute to this parameter set.
    """
    return ParameterSet.param(*values, **kw)


def pytest_addoption(parser):
    group = parser.getgroup("general")
    group._addoption(
        '-k',
        action="store", dest="keyword", default='', metavar="EXPRESSION",
        help="only run tests which match the given substring expression. "
             "An expression is a python evaluatable expression "
             "where all names are substring-matched against test names "
             "and their parent classes. Example: -k 'test_method or test_"
             "other' matches all test functions and classes whose name "
             "contains 'test_method' or 'test_other', while -k 'not test_method' "
             "matches those that don't contain 'test_method' in their names. "
             "Additionally keywords are matched to classes and functions "
             "containing extra names in their 'extra_keyword_matches' set, "
             "as well as functions which have names assigned directly to them."
    )

    group._addoption(
        "-m",
        action="store", dest="markexpr", default="", metavar="MARKEXPR",
        help="only run tests matching given mark expression. "
             "example: -m 'mark1 and not mark2'."
    )

    group.addoption(
        "--markers", action="store_true",
        help="show markers (builtin, plugin and per-project ones)."
    )

    parser.addini("markers", "markers for test functions", 'linelist')
    parser.addini(
        EMPTY_PARAMETERSET_OPTION,
        "default marker for empty parametersets")


def pytest_cmdline_main(config):
    import _pytest.config
    if config.option.markers:
        config._do_configure()
        tw = _pytest.config.create_terminal_writer(config)
        for line in config.getini("markers"):
            parts = line.split(":", 1)
            name = parts[0]
            rest = parts[1] if len(parts) == 2 else ''
            tw.write("@pytest.mark.%s:" % name, bold=True)
            tw.line(rest)
            tw.line()
        config._ensure_unconfigure()
        return 0


pytest_cmdline_main.tryfirst = True


def pytest_collection_modifyitems(items, config):
    keywordexpr = config.option.keyword.lstrip()
    matchexpr = config.option.markexpr
    if not keywordexpr and not matchexpr:
        return
    # pytest used to allow "-" for negating
    # but today we just allow "-" at the beginning, use "not" instead
    # we probably remove "-" altogether soon
    if keywordexpr.startswith("-"):
        keywordexpr = "not " + keywordexpr[1:]
    selectuntil = False
    if keywordexpr[-1:] == ":":
        selectuntil = True
        keywordexpr = keywordexpr[:-1]

    remaining = []
    deselected = []
    for colitem in items:
        if keywordexpr and not matchkeyword(colitem, keywordexpr):
            deselected.append(colitem)
        else:
            if selectuntil:
                keywordexpr = None
            if matchexpr:
                if not matchmark(colitem, matchexpr):
                    deselected.append(colitem)
                    continue
            remaining.append(colitem)

    if deselected:
        config.hook.pytest_deselected(items=deselected)
        items[:] = remaining


@attr.s
class MarkMapping(object):
    """Provides a local mapping for markers where item access
    resolves to True if the marker is present. """

    own_mark_names = attr.ib()

    @classmethod
    def from_keywords(cls, keywords):
        mark_names = set()
        for key, value in keywords.items():
            if isinstance(value, MarkInfo) or isinstance(value, MarkDecorator):
                mark_names.add(key)
        return cls(mark_names)

    def __getitem__(self, name):
        return name in self.own_mark_names


class KeywordMapping(object):
    """Provides a local mapping for keywords.
    Given a list of names, map any substring of one of these names to True.
    """

    def __init__(self, names):
        self._names = names

    def __getitem__(self, subname):
        for name in self._names:
            if subname in name:
                return True
        return False


python_keywords_allowed_list = ["or", "and", "not"]


def matchmark(colitem, markexpr):
    """Tries to match on any marker names, attached to the given colitem."""
    return eval(markexpr, {}, MarkMapping.from_keywords(colitem.keywords))


def matchkeyword(colitem, keywordexpr):
    """Tries to match given keyword expression to given collector item.

    Will match on the name of colitem, including the names of its parents.
    Only matches names of items which are either a :class:`Class` or a
    :class:`Function`.
    Additionally, matches on names in the 'extra_keyword_matches' set of
    any item, as well as names directly assigned to test functions.
    """
    mapped_names = set()

    # Add the names of the current item and any parent items
    import pytest
    for item in colitem.listchain():
        if not isinstance(item, pytest.Instance):
            mapped_names.add(item.name)

    # Add the names added as extra keywords to current or parent items
    for name in colitem.listextrakeywords():
        mapped_names.add(name)

    # Add the names attached to the current function through direct assignment
    if hasattr(colitem, 'function'):
        for name in colitem.function.__dict__:
            mapped_names.add(name)

    mapping = KeywordMapping(mapped_names)
    if " " not in keywordexpr:
        # special case to allow for simple "-k pass" and "-k 1.3"
        return mapping[keywordexpr]
    elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
        return not mapping[keywordexpr[4:]]
    for kwd in keywordexpr.split():
        if keyword.iskeyword(kwd) and kwd not in python_keywords_allowed_list:
            raise UsageError("Python keyword '{}' not accepted in expressions passed to '-k'".format(kwd))
    try:
        return eval(keywordexpr, {}, mapping)
    except SyntaxError:
        raise UsageError("Wrong expression passed to '-k': {}".format(keywordexpr))


def pytest_configure(config):
    config._old_mark_config = MARK_GEN._config
    if config.option.strict:
        MARK_GEN._config = config

    empty_parameterset = config.getini(EMPTY_PARAMETERSET_OPTION)

    if empty_parameterset not in ('skip', 'xfail', None, ''):
        raise UsageError(
            "{!s} must be one of skip and xfail,"
            " but it is {!r}".format(EMPTY_PARAMETERSET_OPTION, empty_parameterset))


def pytest_unconfigure(config):
    MARK_GEN._config = getattr(config, '_old_mark_config', None)


class MarkGenerator(object):
    """ Factory for :class:`MarkDecorator` objects - exposed as
    a ``pytest.mark`` singleton instance. Example::

        import pytest
        @pytest.mark.slowtest
        def test_function():
            pass

    will set a 'slowtest' :class:`MarkInfo` object
    on the ``test_function`` object. """
    _config = None

    def __getattr__(self, name):
        if name[0] == "_":
            raise AttributeError("Marker name must NOT start with underscore")
        if self._config is not None:
            self._check(name)
        return MarkDecorator(Mark(name, (), {}))

    def _check(self, name):
        try:
            if name in self._markers:
                return
        except AttributeError:
            pass
        self._markers = values = set()
        for line in self._config.getini("markers"):
            marker = line.split(":", 1)[0]
            marker = marker.rstrip()
            x = marker.split("(", 1)[0]
            values.add(x)
        if name not in self._markers:
            raise AttributeError("%r not a registered marker" % (name,))


def istestfunc(func):
    return hasattr(func, "__call__") and \
        getattr(func, "__name__", "<lambda>") != "<lambda>"


@attr.s(frozen=True)
class Mark(object):
    name = attr.ib()
@@ -491,6 +239,33 @@ def store_legacy_markinfo(func, mark):
    holder.add_mark(mark)


def transfer_markers(funcobj, cls, mod):
    """
    this function transfers class level markers and module level markers
    into function level markinfo objects

    this is the main reason why marks are so broken
    the resolution will involve phasing out function level MarkInfo objects

    """
    for obj in (cls, mod):
        for mark in get_unpacked_marks(obj):
            if not _marked(funcobj, mark):
                store_legacy_markinfo(funcobj, mark)


def _marked(func, mark):
    """ Returns True if :func: is already marked with :mark:, False otherwise.
    This can happen if marker is applied to class and the test file is
    invoked more than once.
    """
    try:
        func_mark = getattr(func, mark.name)
    except AttributeError:
        return False
    return mark.args == func_mark.args and mark.kwargs == func_mark.kwargs


class MarkInfo(object):
    """ Marking object created by :class:`MarkDecorator` instances. """

@@ -516,31 +291,77 @@ class MarkInfo(object):
        return map(MarkInfo, self._marks)


class MarkGenerator(object):
    """ Factory for :class:`MarkDecorator` objects - exposed as
    a ``pytest.mark`` singleton instance. Example::

        import pytest
        @pytest.mark.slowtest
        def test_function():
            pass

    will set a 'slowtest' :class:`MarkInfo` object
    on the ``test_function`` object. """
    _config = None

    def __getattr__(self, name):
        if name[0] == "_":
            raise AttributeError("Marker name must NOT start with underscore")
        if self._config is not None:
            self._check(name)
        return MarkDecorator(Mark(name, (), {}))

    def _check(self, name):
        try:
            if name in self._markers:
                return
        except AttributeError:
            pass
        self._markers = values = set()
        for line in self._config.getini("markers"):
            marker = line.split(":", 1)[0]
            marker = marker.rstrip()
            x = marker.split("(", 1)[0]
            values.add(x)
        if name not in self._markers:
            raise AttributeError("%r not a registered marker" % (name,))


MARK_GEN = MarkGenerator()


def _marked(func, mark):
    """ Returns True if :func: is already marked with :mark:, False otherwise.
    This can happen if marker is applied to class and the test file is
    invoked more than once.
    """
    try:
        func_mark = getattr(func, mark.name)
    except AttributeError:
        return False
    return mark.args == func_mark.args and mark.kwargs == func_mark.kwargs


class NodeKeywords(MappingMixin):
    def __init__(self, node):
        self.node = node
        self.parent = node.parent
        self._markers = {node.name: True}

    def __getitem__(self, key):
        try:
            return self._markers[key]
        except KeyError:
            if self.parent is None:
                raise
            return self.parent.keywords[key]

    def __setitem__(self, key, value):
        self._markers[key] = value

    def __delitem__(self, key):
        raise ValueError("cannot delete key in keywords dict")

    def __iter__(self):
        seen = self._seen()
        return iter(seen)

    def _seen(self):
        seen = set(self._markers)
        if self.parent is not None:
            seen.update(self.parent.keywords)
        return seen

    def __len__(self):
        return len(self._seen())

    def __repr__(self):
        return "<NodeKeywords for node %s>" % (self.node, )


def transfer_markers(funcobj, cls, mod):
    """
    this function transfers class level markers and module level markers
    into function level markinfo objects

    this is the main reason why marks are so broken
    the resolution will involve phasing out function level MarkInfo objects

    """
    for obj in (cls, mod):
        for mark in get_unpacked_marks(obj):
            if not _marked(funcobj, mark):
                store_legacy_markinfo(funcobj, mark)
@@ -1,5 +1,4 @@
from __future__ import absolute_import, division, print_function
from collections import MutableMapping as MappingMixin
import os

import six
@@ -7,7 +6,9 @@ import py
import attr

import _pytest
import _pytest._code

from _pytest.mark.structures import NodeKeywords

SEP = "/"

@@ -66,47 +67,11 @@ class _CompatProperty(object):
        return getattr(__import__('pytest'), self.name)


class NodeKeywords(MappingMixin):
    def __init__(self, node):
        self.node = node
        self.parent = node.parent
        self._markers = {node.name: True}

    def __getitem__(self, key):
        try:
            return self._markers[key]
        except KeyError:
            if self.parent is None:
                raise
            return self.parent.keywords[key]

    def __setitem__(self, key, value):
        self._markers[key] = value

    def __delitem__(self, key):
        raise ValueError("cannot delete key in keywords dict")

    def __iter__(self):
        seen = set(self._markers)
        if self.parent is not None:
            seen.update(self.parent.keywords)
        return iter(seen)

    def __len__(self):
        return len(self.__iter__())

    def keys(self):
        return list(self)

    def __repr__(self):
        return "<NodeKeywords for node %s>" % (self.node, )


class Node(object):
    """ base class for Collector and Item the test collection tree.
    Collector subclasses have children, Items are terminal nodes."""

    def __init__(self, name, parent=None, config=None, session=None):
    def __init__(self, name, parent=None, config=None, session=None, fspath=None, nodeid=None):
        #: a unique name within the scope of the parent node
        self.name = name

@@ -120,7 +85,7 @@ class Node(object):
        self.session = session or parent.session

        #: filesystem path where this node was collected from (can be None)
        self.fspath = getattr(parent, 'fspath', None)
        self.fspath = fspath or getattr(parent, 'fspath', None)

        #: keywords/markers collected from all scopes
        self.keywords = NodeKeywords(self)
@@ -131,6 +96,12 @@ class Node(object):
        # used for storing artificial fixturedefs for direct parametrization
        self._name2pseudofixturedef = {}

        if nodeid is not None:
            self._nodeid = nodeid
        else:
            assert parent is not None
            self._nodeid = self.parent.nodeid + "::" + self.name

    @property
    def ihook(self):
        """ fspath sensitive hook proxy used to call pytest hooks"""
@@ -174,14 +145,7 @@ class Node(object):
    @property
    def nodeid(self):
        """ a ::-separated string denoting its collection tree address. """
        try:
            return self._nodeid
        except AttributeError:
            self._nodeid = x = self._makeid()
            return x

    def _makeid(self):
        return self.parent.nodeid + "::" + self.name
        return self._nodeid

    def __hash__(self):
        return hash(self.nodeid)
@@ -227,7 +191,6 @@ class Node(object):
    def listextrakeywords(self):
        """ Return a set of all extra keywords in self and any parents."""
        extra_keywords = set()
        item = self
        for item in self.listchain():
            extra_keywords.update(item.extra_keyword_matches)
        return extra_keywords
@@ -319,8 +282,14 @@ class Collector(Node):
        excinfo.traceback = ntraceback.filter()


def _check_initialpaths_for_relpath(session, fspath):
    for initial_path in session._initialpaths:
        if fspath.common(initial_path) == initial_path:
            return fspath.relto(initial_path.dirname)


class FSCollector(Collector):
    def __init__(self, fspath, parent=None, config=None, session=None):
    def __init__(self, fspath, parent=None, config=None, session=None, nodeid=None):
        fspath = py.path.local(fspath)  # xxx only for test_resultlog.py?
        name = fspath.basename
        if parent is not None:
@@ -328,22 +297,19 @@ class FSCollector(Collector):
            if rel:
                name = rel
            name = name.replace(os.sep, SEP)
        super(FSCollector, self).__init__(name, parent, config, session)
        self.fspath = fspath

    def _check_initialpaths_for_relpath(self):
        for initialpath in self.session._initialpaths:
            if self.fspath.common(initialpath) == initialpath:
                return self.fspath.relto(initialpath.dirname)
        session = session or parent.session

    def _makeid(self):
        relpath = self.fspath.relto(self.config.rootdir)
        if nodeid is None:
            nodeid = self.fspath.relto(session.config.rootdir)

        if not relpath:
            relpath = self._check_initialpaths_for_relpath()
        if os.sep != SEP:
            relpath = relpath.replace(os.sep, SEP)
        return relpath
            if not nodeid:
                nodeid = _check_initialpaths_for_relpath(session, fspath)
            if os.sep != SEP:
                nodeid = nodeid.replace(os.sep, SEP)

        super(FSCollector, self).__init__(name, parent, config, session, nodeid=nodeid, fspath=fspath)


class File(FSCollector):
@@ -356,10 +322,14 @@ class Item(Node):
    """
    nextitem = None

    def __init__(self, name, parent=None, config=None, session=None):
        super(Item, self).__init__(name, parent, config, session)
    def __init__(self, name, parent=None, config=None, session=None, nodeid=None):
        super(Item, self).__init__(name, parent, config, session, nodeid=nodeid)
        self._report_sections = []

        #: user properties is a list of tuples (name, value) that holds user
        #: defined properties for this test.
        self.user_properties = []

    def add_report_section(self, when, key, content):
        """
        Adds a new report section, similar to what's done internally to add stdout and
@@ -28,7 +28,7 @@ from _pytest.compat import (
    safe_str, getlocation, enum,
)
from _pytest.outcomes import fail
from _pytest.mark import transfer_markers
from _pytest.mark.structures import transfer_markers


# relative paths that we use to filter traceback entries from appearing to the user;
@@ -2,7 +2,8 @@ import math
import sys

import py
from six.moves import zip
from six.moves import zip, filterfalse
from more_itertools.more import always_iterable

from _pytest.compat import isclass
from _pytest.outcomes import fail
@@ -30,6 +31,10 @@ class ApproxBase(object):
    or sequences of numbers.
    """

    # Tell numpy to use our `__eq__` operator instead of its
    __array_ufunc__ = None
    __array_priority__ = 100

    def __init__(self, expected, rel=None, abs=None, nan_ok=False):
        self.expected = expected
        self.abs = abs
@@ -68,14 +73,13 @@ class ApproxNumpy(ApproxBase):
    Perform approximate comparisons for numpy arrays.
    """

    # Tell numpy to use our `__eq__` operator instead of its.
    __array_priority__ = 100

    def __repr__(self):
        # It might be nice to rewrite this function to account for the
        # shape of the array...
        import numpy as np

        return "approx({0!r})".format(list(
            self._approx_scalar(x) for x in self.expected))
            self._approx_scalar(x) for x in np.asarray(self.expected)))

    if sys.version_info[0] == 2:
        __cmp__ = _cmp_raises_type_error
@@ -83,12 +87,15 @@ class ApproxNumpy(ApproxBase):
    def __eq__(self, actual):
        import numpy as np

        try:
            actual = np.asarray(actual)
        except:  # noqa
            raise TypeError("cannot compare '{0}' to numpy.ndarray".format(actual))
        # self.expected is supposed to always be an array here

        if actual.shape != self.expected.shape:
        if not np.isscalar(actual):
            try:
                actual = np.asarray(actual)
            except:  # noqa
                raise TypeError("cannot compare '{0}' to numpy.ndarray".format(actual))

        if not np.isscalar(actual) and actual.shape != self.expected.shape:
            return False

        return ApproxBase.__eq__(self, actual)
@@ -96,11 +103,16 @@ class ApproxNumpy(ApproxBase):
    def _yield_comparisons(self, actual):
        import numpy as np

        # We can be sure that `actual` is a numpy array, because it's
        # casted in `__eq__` before being passed to `ApproxBase.__eq__`,
        # which is the only method that calls this one.
        for i in np.ndindex(self.expected.shape):
            yield actual[i], self.expected[i]
        # `actual` can either be a numpy array or a scalar, it is treated in
        # `__eq__` before being passed to `ApproxBase.__eq__`, which is the
        # only method that calls this one.

        if np.isscalar(actual):
            for i in np.ndindex(self.expected.shape):
                yield actual, np.asscalar(self.expected[i])
        else:
            for i in np.ndindex(self.expected.shape):
                yield np.asscalar(actual[i]), np.asscalar(self.expected[i])


class ApproxMapping(ApproxBase):
@@ -130,9 +142,6 @@ class ApproxSequence(ApproxBase):
    Perform approximate comparisons for sequences of numbers.
    """

    # Tell numpy to use our `__eq__` operator instead of its.
    __array_priority__ = 100

    def __repr__(self):
        seq_type = type(self.expected)
        if seq_type not in (tuple, list, set):
@@ -188,6 +197,8 @@ class ApproxScalar(ApproxBase):
        Return true if the given value is equal to the expected value within
        the pre-specified tolerance.
        """
        if _is_numpy_array(actual):
            return ApproxNumpy(actual, self.abs, self.rel, self.nan_ok) == self.expected

        # Short-circuit exact equality.
        if actual == self.expected:
@@ -307,12 +318,18 @@ def approx(expected, rel=None, abs=None, nan_ok=False):
        >>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
        True

    And ``numpy`` arrays::
    ``numpy`` arrays::

        >>> import numpy as np                                                          # doctest: +SKIP
        >>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6])) # doctest: +SKIP
        True

    And for a ``numpy`` array against a scalar::

        >>> import numpy as np                                         # doctest: +SKIP
        >>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3) # doctest: +SKIP
        True

    By default, ``approx`` considers numbers within a relative tolerance of
    ``1e-6`` (i.e. one part in a million) of its expected value to be equal.
    This treatment would lead to surprising results if the expected value was
@@ -567,14 +584,10 @@ def raises(expected_exception, *args, **kwargs):

    """
    __tracebackhide__ = True
    msg = ("exceptions must be old-style classes or"
           " derived from BaseException, not %s")
    if isinstance(expected_exception, tuple):
        for exc in expected_exception:
            if not isclass(exc):
                raise TypeError(msg % type(exc))
    elif not isclass(expected_exception):
        raise TypeError(msg % type(expected_exception))
    for exc in filterfalse(isclass, always_iterable(expected_exception)):
        msg = ("exceptions must be old-style classes or"
               " derived from BaseException, not %s")
        raise TypeError(msg % type(exc))

    message = "DID NOT RAISE {0}".format(expected_exception)
    match_expr = None
@@ -256,6 +256,14 @@ class BaseReport(object):
            exc = tw.stringio.getvalue()
            return exc.strip()

    @property
    def caplog(self):
        """Return captured log lines, if log capturing is enabled

        .. versionadded:: 3.5
        """
        return '\n'.join(content for (prefix, content) in self.get_sections('Captured log'))

    @property
    def capstdout(self):
        """Return captured text from stdout, if capturing is enabled
@@ -309,7 +317,7 @@ def pytest_runtest_makereport(item, call):
            sections.append(("Captured %s %s" % (key, rwhen), content))
    return TestReport(item.nodeid, item.location,
                      keywords, outcome, longrepr, when,
                      sections, duration)
                      sections, duration, user_properties=item.user_properties)


class TestReport(BaseReport):
@@ -318,7 +326,7 @@ class TestReport(BaseReport):
    """

    def __init__(self, nodeid, location, keywords, outcome,
                 longrepr, when, sections=(), duration=0, **extra):
                 longrepr, when, sections=(), duration=0, user_properties=(), **extra):
        #: normalized collection node id
        self.nodeid = nodeid

@@ -340,6 +348,10 @@ class TestReport(BaseReport):
        #: one of 'setup', 'call', 'teardown' to indicate runtest phase.
        self.when = when

        #: user properties is a list of tuples (name, value) that holds user
        #: defined properties of the test
        self.user_properties = user_properties

        #: list of pairs ``(str, str)`` of extra information which needs to
        #: marshallable. Used by pytest to add captured text
        #: from ``stdout`` and ``stderr``, but may be used by other plugins
@@ -1,14 +1,10 @@
""" support for skip/xfail functions and markers. """
from __future__ import absolute_import, division, print_function

import os
import six
import sys
import traceback

from _pytest.config import hookimpl
from _pytest.mark import MarkInfo, MarkDecorator
from _pytest.outcomes import fail, skip, xfail, TEST_OUTCOME
from _pytest.mark.evaluate import MarkEvaluator
from _pytest.outcomes import fail, skip, xfail


def pytest_addoption(parser):
@@ -17,11 +13,11 @@ def pytest_addoption(parser):
                    action="store_true", dest="runxfail", default=False,
                    help="run tests even if they are marked xfail")

    parser.addini("xfail_strict", "default for the strict parameter of xfail "
                                  "markers when not given explicitly (default: "
                                  "False)",
                  default=False,
                  type="bool")
    parser.addini("xfail_strict",
                  "default for the strict parameter of xfail "
                  "markers when not given explicitly (default: False)",
                  default=False,
                  type="bool")


def pytest_configure(config):
@@ -60,112 +56,6 @@ def pytest_configure(config):
    )


class MarkEvaluator(object):
    def __init__(self, item, name):
        self.item = item
        self._marks = None
        self._mark = None
        self._mark_name = name

    def __bool__(self):
        self._marks = self._get_marks()
        return bool(self._marks)
    __nonzero__ = __bool__

    def wasvalid(self):
        return not hasattr(self, 'exc')

    def _get_marks(self):

        keyword = self.item.keywords.get(self._mark_name)
        if isinstance(keyword, MarkDecorator):
            return [keyword.mark]
        elif isinstance(keyword, MarkInfo):
            return [x.combined for x in keyword]
        else:
            return []

    def invalidraise(self, exc):
        raises = self.get('raises')
        if not raises:
            return
        return not isinstance(exc, raises)

    def istrue(self):
        try:
            return self._istrue()
        except TEST_OUTCOME:
            self.exc = sys.exc_info()
            if isinstance(self.exc[1], SyntaxError):
                msg = [" " * (self.exc[1].offset + 4) + "^", ]
                msg.append("SyntaxError: invalid syntax")
            else:
                msg = traceback.format_exception_only(*self.exc[:2])
            fail("Error evaluating %r expression\n"
                 " %s\n"
                 "%s"
                 % (self._mark_name, self.expr, "\n".join(msg)),
                 pytrace=False)

    def _getglobals(self):
        d = {'os': os, 'sys': sys, 'config': self.item.config}
        if hasattr(self.item, 'obj'):
            d.update(self.item.obj.__globals__)
        return d

    def _istrue(self):
        if hasattr(self, 'result'):
            return self.result
        self._marks = self._get_marks()

        if self._marks:
            self.result = False
            for mark in self._marks:
                self._mark = mark
                if 'condition' in mark.kwargs:
                    args = (mark.kwargs['condition'],)
                else:
                    args = mark.args

                for expr in args:
                    self.expr = expr
                    if isinstance(expr, six.string_types):
                        d = self._getglobals()
                        result = cached_eval(self.item.config, expr, d)
                    else:
                        if "reason" not in mark.kwargs:
                            # XXX better be checked at collection time
                            msg = "you need to specify reason=STRING " \
                                  "when using booleans as conditions."
                            fail(msg)
                        result = bool(expr)
                    if result:
                        self.result = True
                        self.reason = mark.kwargs.get('reason', None)
                        self.expr = expr
                        return self.result

                if not args:
                    self.result = True
                    self.reason = mark.kwargs.get('reason', None)
                    return self.result
        return False

    def get(self, attr, default=None):
        if self._mark is None:
            return default
        return self._mark.kwargs.get(attr, default)

    def getexplanation(self):
        expl = getattr(self, 'reason', None) or self.get('reason', None)
        if not expl:
            if not hasattr(self, 'expr'):
                return ""
            else:
                return "condition: " + str(self.expr)
        return expl


@hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
    # Check if skip or skipif are specified as pytest marks
@@ -239,7 +129,7 @@ def pytest_runtest_makereport(item, call):
            rep.outcome = "passed"
            rep.wasxfail = rep.longrepr
    elif item.config.option.runxfail:
        pass   # don't interefere
        pass  # don't interefere
    elif call.excinfo and call.excinfo.errisinstance(xfail.Exception):
        rep.wasxfail = "reason: " + call.excinfo.value.msg
        rep.outcome = "skipped"
@@ -269,6 +159,7 @@ def pytest_runtest_makereport(item, call):
            filename, line = item.location[:2]
            rep.longrepr = filename, line, reason


# called by terminalreporter progress reporting

@@ -279,6 +170,7 @@ def pytest_report_teststatus(report):
        elif report.passed:
            return "xpassed", "X", ("XPASS", {'yellow': True})


# called by the terminalreporter instance/plugin

@@ -294,18 +186,8 @@ def pytest_terminal_summary(terminalreporter):

    lines = []
    for char in tr.reportchars:
        if char == "x":
            show_xfailed(terminalreporter, lines)
        elif char == "X":
            show_xpassed(terminalreporter, lines)
        elif char in "fF":
            show_simple(terminalreporter, lines, 'failed', "FAIL %s")
        elif char in "sS":
            show_skipped(terminalreporter, lines)
        elif char == "E":
            show_simple(terminalreporter, lines, 'error', "ERROR %s")
        elif char == 'p':
            show_simple(terminalreporter, lines, 'passed', "PASSED %s")
        action = REPORTCHAR_ACTIONS.get(char, lambda tr, lines: None)
        action(terminalreporter, lines)

    if lines:
        tr._tw.sep("=", "short test summary info")
@@ -341,18 +223,6 @@ def show_xpassed(terminalreporter, lines):
            lines.append("XPASS %s %s" % (pos, reason))


def cached_eval(config, expr, d):
    if not hasattr(config, '_evalcache'):
        config._evalcache = {}
    try:
        return config._evalcache[expr]
    except KeyError:
        import _pytest._code
        exprcode = _pytest._code.compile(expr, mode="eval")
        config._evalcache[expr] = x = eval(exprcode, d)
        return x


def folded_skips(skipped):
    d = {}
    for event in skipped:
@@ -364,7 +234,7 @@ def folded_skips(skipped):
        # TODO: revisit after marks scope would be fixed
        when = getattr(event, 'when', None)
        if when == 'setup' and 'skip' in keywords and 'pytestmark' not in keywords:
            key = (key[0], None, key[2], )
            key = (key[0], None, key[2])
        d.setdefault(key, []).append(event)
    values = []
    for key, events in d.items():
@@ -395,3 +265,23 @@ def show_skipped(terminalreporter, lines):
            lines.append(
                "SKIP [%d] %s: %s" %
                (num, fspath, reason))


def shower(stat, format):
    def show_(terminalreporter, lines):
        return show_simple(terminalreporter, lines, stat, format)

    return show_


REPORTCHAR_ACTIONS = {
    'x': show_xfailed,
    'X': show_xpassed,
    'f': shower('failed', "FAIL %s"),
    'F': shower('failed', "FAIL %s"),
    's': show_skipped,
    'S': show_skipped,
    'p': shower('passed', "PASSED %s"),
    'E': shower('error', "ERROR %s")

}
@@ -12,6 +12,7 @@ import time
import pluggy
import py
import six
from more_itertools import collapse

import pytest
from _pytest import nodes
@@ -19,12 +20,45 @@ from _pytest.main import EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, \
    EXIT_USAGEERROR, EXIT_NOTESTSCOLLECTED


import argparse


class MoreQuietAction(argparse.Action):
    """
    a modified copy of the argparse count action which counts down and updates
    the legacy quiet attribute at the same time

    used to unify verbosity handling
    """
    def __init__(self,
                 option_strings,
                 dest,
                 default=None,
                 required=False,
                 help=None):
        super(MoreQuietAction, self).__init__(
            option_strings=option_strings,
            dest=dest,
            nargs=0,
            default=default,
            required=required,
            help=help)

    def __call__(self, parser, namespace, values, option_string=None):
        new_count = getattr(namespace, self.dest, 0) - 1
        setattr(namespace, self.dest, new_count)
        # todo Deprecate config.quiet
        namespace.quiet = getattr(namespace, 'quiet', 0) + 1


def pytest_addoption(parser):
    group = parser.getgroup("terminal reporting", "reporting", after="general")
    group._addoption('-v', '--verbose', action="count",
                     dest="verbose", default=0, help="increase verbosity."),
    group._addoption('-q', '--quiet', action="count",
                     dest="quiet", default=0, help="decrease verbosity."),
    group._addoption('-v', '--verbose', action="count", default=0,
                     dest="verbose", help="increase verbosity."),
    group._addoption('-q', '--quiet', action=MoreQuietAction, default=0,
                     dest="verbose", help="decrease verbosity."),
    group._addoption("--verbosity", dest='verbose', type=int, default=0,
                     help="set verbosity")
    group._addoption('-r',
                     action="store", dest="reportchars", default='', metavar="chars",
                     help="show extra test summary info as specified by chars (f)ailed, "
@@ -42,6 +76,11 @@ def pytest_addoption(parser):
                     action="store", dest="tbstyle", default='auto',
                     choices=['auto', 'long', 'short', 'no', 'line', 'native'],
                     help="traceback print mode (auto/long/short/line/native/no).")
    group._addoption('--show-capture',
                     action="store", dest="showcapture",
                     choices=['no', 'stdout', 'stderr', 'log', 'all'], default='all',
                     help="Controls how captured stdout/stderr/log is shown on failed tests. "
                          "Default is 'all'.")
    group._addoption('--fulltrace', '--full-trace',
                     action="store_true", default=False,
                     help="don't cut any tracebacks (default is to cut).")
@@ -56,7 +95,6 @@ def pytest_addoption(parser):


def pytest_configure(config):
    config.option.verbose -= config.option.quiet
    reporter = TerminalReporter(config, sys.stdout)
    config.pluginmanager.register(reporter, 'terminalreporter')
    if config.option.debug or config.option.traceconfig:
@@ -358,6 +396,7 @@ class TerminalReporter(object):

        errors = len(self.stats.get('error', []))
        skipped = len(self.stats.get('skipped', []))
        deselected = len(self.stats.get('deselected', []))
        if final:
            line = "collected "
        else:
@@ -365,6 +404,8 @@ class TerminalReporter(object):
        line += str(self._numcollected) + " item" + ('' if self._numcollected == 1 else 's')
        if errors:
            line += " / %d errors" % errors
        if deselected:
            line += " / %d deselected" % deselected
        if skipped:
            line += " / %d skipped" % skipped
        if self.isatty:
@@ -374,6 +415,7 @@ class TerminalReporter(object):
        else:
            self.write_line(line)

    @pytest.hookimpl(trylast=True)
    def pytest_collection_modifyitems(self):
        self.report_collect(True)

@@ -401,7 +443,7 @@ class TerminalReporter(object):

    def _write_report_lines_from_hooks(self, lines):
        lines.reverse()
        for line in flatten(lines):
        for line in collapse(lines):
            self.write_line(line)

    def pytest_report_header(self, config):
@@ -474,16 +516,19 @@ class TerminalReporter(object):
        if exitstatus in summary_exit_codes:
            self.config.hook.pytest_terminal_summary(terminalreporter=self,
                                                     exitstatus=exitstatus)
            self.summary_errors()
            self.summary_failures()
            self.summary_warnings()
            self.summary_passes()
        if exitstatus == EXIT_INTERRUPTED:
            self._report_keyboardinterrupt()
            del self._keyboardinterrupt_memo
        self.summary_deselected()
        self.summary_stats()

    @pytest.hookimpl(hookwrapper=True)
    def pytest_terminal_summary(self):
        self.summary_errors()
        self.summary_failures()
        yield
        self.summary_warnings()
        self.summary_passes()

    def pytest_keyboard_interrupt(self, excinfo):
        self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True)

@@ -624,7 +669,12 @@ class TerminalReporter(object):

    def _outrep_summary(self, rep):
        rep.toterminal(self._tw)
        showcapture = self.config.option.showcapture
        if showcapture == 'no':
            return
        for secname, content in rep.sections:
            if showcapture != 'all' and showcapture not in secname:
                continue
            self._tw.sep("-", secname)
            if content[-1:] == "\n":
                content = content[:-1]
@@ -641,11 +691,6 @@ class TerminalReporter(object):
        if self.verbosity == -1:
            self.write_line(msg, **markup)

    def summary_deselected(self):
        if 'deselected' in self.stats:
            self.write_sep("=", "%d tests deselected" % (
                len(self.stats['deselected'])), bold=True)


def repr_pythonversion(v=None):
    if v is None:
@@ -656,15 +701,6 @@ def repr_pythonversion(v=None):
    return str(v)


def flatten(values):
    for x in values:
        if isinstance(x, (list, tuple)):
            for y in flatten(x):
                yield y
        else:
            yield x


def build_summary_stats_line(stats):
    keys = ("failed passed skipped deselected "
            "xfailed xpassed warnings error").split()
@@ -0,0 +1 @@
New ``--show-capture`` command-line option that allows specifying how to display captured output when tests fail: ``no``, ``stdout``, ``stderr``, ``log`` or ``all`` (the default).
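For example (invocations shown as comments; the failing test is hypothetical)::

    def test_fails():
        print("some stdout")
        assert False

    # pytest --show-capture=no      -> hide all captured output in the failure report
    # pytest --show-capture=stdout  -> show only the "Captured stdout" sections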
@@ -0,0 +1 @@
New ``--rootdir`` command-line option to override the rules for discovering the root directory. See `customize <https://docs.pytest.org/en/latest/customize.html>`_ in the documentation for details.
@ -0,0 +1 @@
|
|||
Fixtures are now instantiated based on their scopes, with higher-scoped fixtures (such as ``session``) being instantiated before lower-scoped fixtures (such as ``function``). The relative order of fixtures of the same scope is kept unchanged, based on their declaration order and their dependencies.
|
|
@ -0,0 +1,2 @@
|
|||
``record_xml_property`` renamed to ``record_property`` and is now compatible with xdist, markers and any reporter.
|
||||
The ``record_xml_property`` name is now deprecated.
|
|
@ -0,0 +1 @@
|
|||
``record_xml_property`` fixture is now deprecated in favor of the more generic ``record_property``.
|
|
@ -0,0 +1 @@
|
|||
New ``--nf``, ``--new-first`` options: run new tests first, followed by the rest of the tests; in both cases, tests are also sorted by file modification time, with more recent files coming first.
|
|
@ -0,0 +1 @@
|
|||
Defining ``pytest_plugins`` is now deprecated in non-top-level conftest.py files, because they "leak" to the entire directory tree.
|
|
@ -0,0 +1 @@
|
|||
New ``--last-failed-no-failures`` command-line option that allows specifying the behavior of the cache plugin's ``--last-failed`` feature when no tests failed in the last run (or no cache was found): ``none`` or ``all`` (the default).
|
|
@ -0,0 +1 @@
|
|||
New ``--doctest-continue-on-failure`` command-line option to enable doctests to show multiple failures for each snippet, instead of stopping at the first failure.
|
|
@ -0,0 +1 @@
|
|||
Captured log messages are added to the ``<system-out>`` tag in the generated junit xml file if the ``junit_logging`` ini option is set to ``system-out``. If the value of this ini option is ``system-err``, the logs are written to ``<system-err>``. The default value for ``junit_logging`` is ``no``, meaning captured logs are not written to the output file.
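A minimal sketch of enabling this in ``pytest.ini`` (using the option values described above):

.. code-block:: ini

    [pytest]
    junit_logging = system-out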
|
|
@ -0,0 +1 @@
|
|||
Allow the logging plugin to handle ``pytest_runtest_logstart`` and ``pytest_runtest_logfinish`` hooks when live logs are enabled.
|
|
@ -0,0 +1 @@
|
|||
Passing ``--log-cli-level`` on the command line now automatically activates live logging.
|
|
@ -0,0 +1 @@
|
|||
Add command line option ``--deselect`` to allow deselection of individual tests at collection time.
|
|
@ -0,0 +1 @@
|
|||
Captured logs are printed before entering pdb.
|
|
@ -0,0 +1 @@
|
|||
Deselected item count is now shown before tests are run, e.g. ``collected X items / Y deselected``.
|
|
@ -0,0 +1 @@
|
|||
Change minimum requirement of ``attrs`` to ``17.4.0``.
|
|
@ -0,0 +1 @@
|
|||
The builtin module ``platform`` is now available for use in expressions in ``pytest.mark``.
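For illustration, a hypothetical string-condition mark that relies on ``platform`` (the test name and condition are assumptions for the example):

.. code-block:: python

    import pytest

    # ``platform`` is available in the namespace used to evaluate the condition string
    @pytest.mark.skipif("platform.python_implementation() == 'PyPy'")
    def test_cpython_only():
        assert True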
|
|
@ -0,0 +1 @@
|
|||
Remove usage of deprecated ``metafunc.addcall`` in our own tests.
|
|
@ -0,0 +1 @@
|
|||
Internal ``mark.py`` module has been turned into a package.
|
|
@ -0,0 +1 @@
|
|||
The *short test summary info* section is now displayed after tracebacks and warnings in the terminal.
|
|
@ -0,0 +1 @@
|
|||
``pytest`` now depends on the `more_itertools <https://github.com/erikrose/more-itertools>`_ package.
|
|
@ -0,0 +1 @@
|
|||
Added a warning when the ``[pytest]`` section is used in a ``.cfg`` file passed with ``-c``.
|
|
@ -0,0 +1 @@
|
|||
``nodeids`` can now be passed explicitly to ``FSCollector`` and ``Node`` constructors.
|
|
@ -0,0 +1 @@
|
|||
Internal refactoring of ``FormattedExcinfo`` to use ``attrs`` facilities and remove old support code for legacy Python versions.
|
|
@ -0,0 +1 @@
|
|||
New ``--verbosity`` flag to set verbosity level explicitly.
|
|
@ -0,0 +1 @@
|
|||
Refactoring to unify how verbosity is handled internally.
|
|
@ -0,0 +1 @@
|
|||
Internal refactoring to better integrate with argparse.
|
|
@ -0,0 +1 @@
|
|||
``pytest.approx`` now accepts comparing a numpy array with a scalar.
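A minimal illustration, mirroring the new tests added in this change (``numpy`` is assumed to be installed):

.. code-block:: python

    import numpy as np
    from pytest import approx

    # each element of the array is compared against the scalar
    assert np.array([1 + 1e-7, 1 - 1e-8]) == approx(1.0, rel=5e-7, abs=0)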
|
|
@ -0,0 +1 @@
|
|||
Remove internal ``_pytest.terminal.flatten`` function in favor of ``more_itertools.collapse``.
|
|
@ -6,6 +6,7 @@ Release announcements
|
|||
:maxdepth: 2
|
||||
|
||||
|
||||
release-3.5.0
|
||||
release-3.4.2
|
||||
release-3.4.1
|
||||
release-3.4.0
|
||||
|
|
|
@ -0,0 +1,51 @@
|
|||
pytest-3.5.0
|
||||
=======================================
|
||||
|
||||
The pytest team is proud to announce the 3.5.0 release!
|
||||
|
||||
pytest is a mature Python testing tool with more than 1600 tests
|
||||
against itself, passing on many different interpreters and platforms.
|
||||
|
||||
This release contains a number of bug fixes and improvements, so users are encouraged
|
||||
to take a look at the CHANGELOG:
|
||||
|
||||
http://doc.pytest.org/en/latest/changelog.html
|
||||
|
||||
For complete documentation, please visit:
|
||||
|
||||
http://docs.pytest.org
|
||||
|
||||
As usual, you can upgrade from PyPI via:
|
||||
|
||||
pip install -U pytest
|
||||
|
||||
Thanks to all who contributed to this release, among them:
|
||||
|
||||
* Allan Feldman
|
||||
* Brian Maissy
|
||||
* Bruno Oliveira
|
||||
* Carlos Jenkins
|
||||
* Daniel Hahler
|
||||
* Florian Bruhin
|
||||
* Jason R. Coombs
|
||||
* Jeffrey Rackauckas
|
||||
* Jordan Speicher
|
||||
* Julien Palard
|
||||
* Kale Kundert
|
||||
* Kostis Anagnostopoulos
|
||||
* Kyle Altendorf
|
||||
* Maik Figura
|
||||
* Pedro Algarvio
|
||||
* Ronny Pfannschmidt
|
||||
* Tadeu Manoel
|
||||
* Tareq Alayan
|
||||
* Thomas Hisch
|
||||
* William Lee
|
||||
* codetriage-readme-bot
|
||||
* feuillemorte
|
||||
* joshm91
|
||||
* mike
|
||||
|
||||
|
||||
Happy testing,
|
||||
The Pytest Development Team
|
|
@ -15,13 +15,109 @@ For information on the ``pytest.mark`` mechanism, see :ref:`mark`.
|
|||
For information about fixtures, see :ref:`fixtures`. To see a complete list of available fixtures, type::
|
||||
|
||||
$ pytest -q --fixtures
|
||||
|
||||
cache
|
||||
Return a cache object that can persist state between testing sessions.
|
||||
|
||||
cache.get(key, default)
|
||||
cache.set(key, value)
|
||||
|
||||
Keys must be a ``/`` separated value, where the first part is usually the
|
||||
name of your plugin or application to avoid clashes with other cache users.
|
||||
|
||||
Values can be any object handled by the json stdlib module.
|
||||
capsys
|
||||
Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
|
||||
captured output available via ``capsys.readouterr()`` method calls
|
||||
which return a ``(out, err)`` namedtuple. ``out`` and ``err`` will be ``text``
|
||||
objects.
|
||||
capsysbinary
|
||||
Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
|
||||
    captured output available via ``capsysbinary.readouterr()`` method calls
|
||||
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``bytes``
|
||||
objects.
|
||||
capfd
|
||||
Enable capturing of writes to file descriptors ``1`` and ``2`` and make
|
||||
captured output available via ``capfd.readouterr()`` method calls
|
||||
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``text``
|
||||
objects.
|
||||
capfdbinary
|
||||
    Enable capturing of writes to file descriptors 1 and 2 and make
|
||||
captured output available via ``capfdbinary.readouterr`` method calls
|
||||
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be
|
||||
``bytes`` objects.
|
||||
doctest_namespace
|
||||
Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
|
||||
pytestconfig
|
||||
Session-scoped fixture that returns the :class:`_pytest.config.Config` object.
|
||||
|
||||
Example::
|
||||
|
||||
def test_foo(pytestconfig):
|
||||
if pytestconfig.getoption("verbose"):
|
||||
...
|
||||
record_property
|
||||
    Add extra properties to the calling test.
|
||||
User properties become part of the test report and are available to the
|
||||
configured reporters, like JUnit XML.
|
||||
The fixture is callable with ``(name, value)``, with value being automatically
|
||||
xml-encoded.
|
||||
|
||||
Example::
|
||||
|
||||
def test_function(record_property):
|
||||
record_property("example_key", 1)
|
||||
record_xml_property
|
||||
(Deprecated) use record_property.
|
||||
record_xml_attribute
|
||||
Add extra xml attributes to the tag for the calling test.
|
||||
The fixture is callable with ``(name, value)``, with value being
|
||||
    automatically xml-encoded.
|
||||
caplog
|
||||
Access and control log capturing.
|
||||
|
||||
Captured logs are available through the following methods::
|
||||
|
||||
* caplog.text() -> string containing formatted log output
|
||||
* caplog.records() -> list of logging.LogRecord instances
|
||||
* caplog.record_tuples() -> list of (logger_name, level, message) tuples
|
||||
* caplog.clear() -> clear captured records and formatted log output string
|
||||
monkeypatch
|
||||
The returned ``monkeypatch`` fixture provides these
|
||||
helper methods to modify objects, dictionaries or os.environ::
|
||||
|
||||
monkeypatch.setattr(obj, name, value, raising=True)
|
||||
monkeypatch.delattr(obj, name, raising=True)
|
||||
monkeypatch.setitem(mapping, name, value)
|
||||
monkeypatch.delitem(obj, name, raising=True)
|
||||
monkeypatch.setenv(name, value, prepend=False)
|
||||
    monkeypatch.delenv(name, raising=True)
|
||||
monkeypatch.syspath_prepend(path)
|
||||
monkeypatch.chdir(path)
|
||||
|
||||
All modifications will be undone after the requesting
|
||||
test function or fixture has finished. The ``raising``
|
||||
parameter determines if a KeyError or AttributeError
|
||||
will be raised if the set/deletion operation has no target.
|
||||
recwarn
|
||||
Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.
|
||||
|
||||
See http://docs.python.org/library/warnings.html for information
|
||||
on warning categories.
|
||||
tmpdir_factory
|
||||
Return a TempdirFactory instance for the test session.
|
||||
tmpdir
|
||||
Return a temporary directory path object
|
||||
which is unique to each test function invocation,
|
||||
created as a sub directory of the base temporary
|
||||
directory. The returned object is a `py.path.local`_
|
||||
path object.
|
||||
|
||||
.. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html
|
||||
|
||||
no tests ran in 0.12 seconds
|
||||
|
||||
You can also interactively ask for help, e.g. by typing on the Python interactive prompt something like::
|
||||
|
||||
import pytest
|
||||
help(pytest)
|
||||
|
||||
|
||||
|
||||
|
||||
|
|
|
@ -78,7 +78,7 @@ If you then run it with ``--lf``::
|
|||
=========================== test session starts ============================
|
||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collected 50 items
|
||||
collected 50 items / 48 deselected
|
||||
run-last-failure: rerun previous 2 failures
|
||||
|
||||
test_50.py FF [100%]
|
||||
|
@ -106,7 +106,6 @@ If you then run it with ``--lf``::
|
|||
E Failed: bad luck
|
||||
|
||||
test_50.py:6: Failed
|
||||
=========================== 48 tests deselected ============================
|
||||
================= 2 failed, 48 deselected in 0.12 seconds ==================
|
||||
|
||||
You have run only the two failing test from the last run, while 48 tests have
|
||||
|
@ -152,6 +151,20 @@ of ``FF`` and dots)::
|
|||
|
||||
.. _`config.cache`:
|
||||
|
||||
New ``--nf``, ``--new-first`` options: run new tests first followed by the rest
|
||||
of the tests, in both cases tests are also sorted by the file modified time,
|
||||
with more recent files coming first.
|
||||
|
||||
Behavior when no tests failed in the last run
|
||||
---------------------------------------------
|
||||
|
||||
When no tests failed in the last run, or when no cached ``lastfailed`` data was
|
||||
found, ``pytest`` can be configured either to run all of the tests or no tests,
|
||||
using the ``--last-failed-no-failures`` option, which takes one of the following values::
|
||||
|
||||
pytest --last-failed-no-failures all # run all tests (default behavior)
|
||||
pytest --last-failed-no-failures none # run no tests and exit
|
||||
|
||||
The new config.cache object
|
||||
--------------------------------
|
||||
|
||||
|
@ -229,6 +242,8 @@ You can always peek at the content of the cache using the
|
|||
------------------------------- cache values -------------------------------
|
||||
cache/lastfailed contains:
|
||||
{'test_caching.py::test_function': True}
|
||||
cache/nodeids contains:
|
||||
['test_caching.py::test_function']
|
||||
example/value contains:
|
||||
42
|
||||
|
||||
|
|
|
@ -9,7 +9,8 @@ Default stdout/stderr/stdin capturing behaviour
|
|||
|
||||
During test execution any output sent to ``stdout`` and ``stderr`` is
|
||||
captured. If a test or a setup method fails, its corresponding captured
|
||||
output will usually be shown along with the failure traceback.
|
||||
output will usually be shown along with the failure traceback (this
|
||||
behavior can be configured by the ``--show-capture`` command-line option).
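For example, to display only the captured ``stderr`` of failing tests::

    pytest --show-capture=stderr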
|
||||
|
||||
In addition, ``stdin`` is set to a "null" object which will
|
||||
fail on attempts to read from it because it is rarely desired
|
||||
|
|
|
@ -38,6 +38,10 @@ Here's a summary what ``pytest`` uses ``rootdir`` for:
|
|||
Important to emphasize that ``rootdir`` is **NOT** used to modify ``sys.path``/``PYTHONPATH`` or
|
||||
influence how modules are imported. See :ref:`pythonpath` for more details.
|
||||
|
||||
The ``--rootdir=path`` command-line option can be used to force a specific directory.
|
||||
The directory passed may contain environment variables when it is used in conjunction
|
||||
with ``addopts`` in a ``pytest.ini`` file.
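A sketch of such a ``pytest.ini``, assuming a hypothetical ``TESTS_ROOT`` environment variable:

.. code-block:: ini

    [pytest]
    addopts = --rootdir=$TESTS_ROOT/mytests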
|
||||
|
||||
Finding the ``rootdir``
|
||||
~~~~~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
|
|
|
@ -115,6 +115,11 @@ itself::
|
|||
>>> get_unicode_greeting() # doctest: +ALLOW_UNICODE
|
||||
'Hello'
|
||||
|
||||
By default, pytest reports only the first failure for a given doctest. If
|
||||
you want to continue the test even when failures occur, use::
|
||||
|
||||
pytest --doctest-modules --doctest-continue-on-failure
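As a hypothetical sketch, both checks in the docstring below fail on purpose; with the option enabled, both failures are reported instead of only the first:

.. code-block:: python

    def halve(x):
        """Divide ``x`` by two.

        >>> halve(4)
        3
        >>> halve(8)
        5
        """
        return x / 2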
|
||||
|
||||
|
||||
.. _`doctest_namespace`:
|
||||
|
||||
|
|
|
@ -34,11 +34,10 @@ You can then restrict a test run to only run tests marked with ``webtest``::
|
|||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
||||
cachedir: .pytest_cache
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collecting ... collected 4 items
|
||||
collecting ... collected 4 items / 3 deselected
|
||||
|
||||
test_server.py::test_send_http PASSED [100%]
|
||||
|
||||
============================ 3 tests deselected ============================
|
||||
================== 1 passed, 3 deselected in 0.12 seconds ==================
|
||||
|
||||
Or the inverse, running all tests except the webtest ones::
|
||||
|
@ -48,13 +47,12 @@ Or the inverse, running all tests except the webtest ones::
|
|||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
||||
cachedir: .pytest_cache
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collecting ... collected 4 items
|
||||
collecting ... collected 4 items / 1 deselected
|
||||
|
||||
test_server.py::test_something_quick PASSED [ 33%]
|
||||
test_server.py::test_another PASSED [ 66%]
|
||||
test_server.py::TestClass::test_method PASSED [100%]
|
||||
|
||||
============================ 1 tests deselected ============================
|
||||
================== 3 passed, 1 deselected in 0.12 seconds ==================
|
||||
|
||||
Selecting tests based on their node ID
|
||||
|
@ -133,11 +131,10 @@ select tests based on their names::
|
|||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
||||
cachedir: .pytest_cache
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collecting ... collected 4 items
|
||||
collecting ... collected 4 items / 3 deselected
|
||||
|
||||
test_server.py::test_send_http PASSED [100%]
|
||||
|
||||
============================ 3 tests deselected ============================
|
||||
================== 1 passed, 3 deselected in 0.12 seconds ==================
|
||||
|
||||
And you can also run all tests except the ones that match the keyword::
|
||||
|
@ -147,13 +144,12 @@ And you can also run all tests except the ones that match the keyword::
|
|||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
||||
cachedir: .pytest_cache
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collecting ... collected 4 items
|
||||
collecting ... collected 4 items / 1 deselected
|
||||
|
||||
test_server.py::test_something_quick PASSED [ 33%]
|
||||
test_server.py::test_another PASSED [ 66%]
|
||||
test_server.py::TestClass::test_method PASSED [100%]
|
||||
|
||||
============================ 1 tests deselected ============================
|
||||
================== 3 passed, 1 deselected in 0.12 seconds ==================
|
||||
|
||||
Or to select "http" and "quick" tests::
|
||||
|
@ -163,12 +159,11 @@ Or to select "http" and "quick" tests::
|
|||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y -- $PYTHON_PREFIX/bin/python3.5
|
||||
cachedir: .pytest_cache
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collecting ... collected 4 items
|
||||
collecting ... collected 4 items / 2 deselected
|
||||
|
||||
test_server.py::test_send_http PASSED [ 50%]
|
||||
test_server.py::test_something_quick PASSED [100%]
|
||||
|
||||
============================ 2 tests deselected ============================
|
||||
================== 2 passed, 2 deselected in 0.12 seconds ==================
|
||||
|
||||
.. note::
|
||||
|
@ -547,11 +542,10 @@ Note that if you specify a platform via the marker-command line option like this
|
|||
=========================== test session starts ============================
|
||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collected 4 items
|
||||
collected 4 items / 3 deselected
|
||||
|
||||
test_plat.py . [100%]
|
||||
|
||||
============================ 3 tests deselected ============================
|
||||
================== 1 passed, 3 deselected in 0.12 seconds ==================
|
||||
|
||||
then the unmarked-tests will not be run. It is thus a way to restrict the run to the specific tests.
|
||||
|
@ -599,7 +593,7 @@ We can now use the ``-m option`` to select one set::
|
|||
=========================== test session starts ============================
|
||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collected 4 items
|
||||
collected 4 items / 2 deselected
|
||||
|
||||
test_module.py FF [100%]
|
||||
|
||||
|
@ -612,7 +606,6 @@ We can now use the ``-m option`` to select one set::
|
|||
test_module.py:6: in test_interface_complex
|
||||
assert 0
|
||||
E assert 0
|
||||
============================ 2 tests deselected ============================
|
||||
================== 2 failed, 2 deselected in 0.12 seconds ==================
|
||||
|
||||
or to select both "event" and "interface" tests::
|
||||
|
@ -621,7 +614,7 @@ or to select both "event" and "interface" tests::
|
|||
=========================== test session starts ============================
|
||||
platform linux -- Python 3.x.y, pytest-3.x.y, py-1.x.y, pluggy-0.x.y
|
||||
rootdir: $REGENDOC_TMPDIR, inifile:
|
||||
collected 4 items
|
||||
collected 4 items / 1 deselected
|
||||
|
||||
test_module.py FFF [100%]
|
||||
|
||||
|
@ -638,5 +631,4 @@ or to select both "event" and "interface" tests::
|
|||
test_module.py:9: in test_event_simple
|
||||
assert 0
|
||||
E assert 0
|
||||
============================ 1 tests deselected ============================
|
||||
================== 3 failed, 1 deselected in 0.12 seconds ==================
|
||||
|
|
|
@ -39,6 +39,14 @@ you will see that ``pytest`` only collects test-modules, which do not match the
|
|||
|
||||
======= 5 passed in 0.02 seconds =======
|
||||
|
||||
Deselect tests during test collection
|
||||
-------------------------------------
|
||||
|
||||
Tests can be deselected individually during collection by passing the ``--deselect=item`` option.
|
||||
For example, say ``tests/foobar/test_foobar_01.py`` contains ``test_a`` and ``test_b``.
|
||||
You can run all of the tests within ``tests/`` *except* for ``tests/foobar/test_foobar_01.py::test_a``
|
||||
by invoking ``pytest`` with ``--deselect tests/foobar/test_foobar_01.py::test_a``.
|
||||
``pytest`` allows multiple ``--deselect`` options.
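For example, using the paths from the paragraph above::

    pytest tests/ --deselect tests/foobar/test_foobar_01.py::test_a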
|
||||
|
||||
Keeping duplicate paths specified from command line
|
||||
----------------------------------------------------
|
||||
|
|
|
@ -358,7 +358,7 @@ get on the terminal - we are working on that)::
|
|||
> int(s)
|
||||
E ValueError: invalid literal for int() with base 10: 'qwe'
|
||||
|
||||
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:595>:1: ValueError
|
||||
<0-codegen $PYTHON_PREFIX/lib/python3.5/site-packages/_pytest/python_api.py:609>:1: ValueError
|
||||
______________________ TestRaises.test_raises_doesnt _______________________
|
||||
|
||||
self = <failure_demo.TestRaises object at 0xdeadbeef>
|
||||
|
|
|
@ -389,7 +389,7 @@ Now we can profile which test functions execute the slowest::
|
|||
========================= slowest 3 test durations =========================
|
||||
0.30s call test_some_are_slow.py::test_funcslow2
|
||||
0.20s call test_some_are_slow.py::test_funcslow1
|
||||
0.10s call test_some_are_slow.py::test_funcfast
|
||||
0.16s call test_some_are_slow.py::test_funcfast
|
||||
========================= 3 passed in 0.12 seconds =========================
|
||||
|
||||
incremental testing - test steps
|
||||
|
@ -451,9 +451,6 @@ If we run this::
|
|||
collected 4 items
|
||||
|
||||
test_step.py .Fx. [100%]
|
||||
========================= short test summary info ==========================
|
||||
XFAIL test_step.py::TestUserHandling::()::test_deletion
|
||||
reason: previous test failed (test_modification)
|
||||
|
||||
================================= FAILURES =================================
|
||||
____________________ TestUserHandling.test_modification ____________________
|
||||
|
@ -465,6 +462,9 @@ If we run this::
|
|||
E assert 0
|
||||
|
||||
test_step.py:9: AssertionError
|
||||
========================= short test summary info ==========================
|
||||
XFAIL test_step.py::TestUserHandling::()::test_deletion
|
||||
reason: previous test failed (test_modification)
|
||||
============== 1 failed, 2 passed, 1 xfailed in 0.12 seconds ===============
|
||||
|
||||
We'll see that ``test_deletion`` was not executed because ``test_modification``
|
||||
|
@ -539,7 +539,7 @@ We can run this::
|
|||
file $REGENDOC_TMPDIR/b/test_error.py, line 1
|
||||
def test_root(db): # no db here, will error out
|
||||
E fixture 'db' not found
|
||||
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
|
||||
> available fixtures: cache, capfd, capfdbinary, caplog, capsys, capsysbinary, doctest_namespace, monkeypatch, pytestconfig, record_property, record_xml_attribute, record_xml_property, recwarn, tmpdir, tmpdir_factory
|
||||
> use 'pytest --fixtures [testpath]' for help on them.
|
||||
|
||||
$REGENDOC_TMPDIR/b/test_error.py:1
|
||||
|
|
|
@ -256,6 +256,50 @@ instance, you can simply declare it:
|
|||
|
||||
Finally, the ``class`` scope will invoke the fixture once per test *class*.
|
||||
|
||||
|
||||
Higher-scoped fixtures are instantiated first
|
||||
---------------------------------------------
|
||||
|
||||
.. versionadded:: 3.5
|
||||
|
||||
Within a test function's request for fixtures, higher-scoped fixtures (such as ``session``) are instantiated before
|
||||
lower-scoped fixtures (such as ``function`` or ``class``). The relative order of fixtures of the same scope follows
|
||||
the declared order in the test function and honours dependencies between fixtures.
|
||||
|
||||
Consider the code below:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def s1():
|
||||
pass
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def m1():
|
||||
pass
|
||||
|
||||
@pytest.fixture
|
||||
def f1(tmpdir):
|
||||
pass
|
||||
|
||||
@pytest.fixture
|
||||
def f2():
|
||||
pass
|
||||
|
||||
def test_foo(f1, m1, f2, s1):
|
||||
...
|
||||
|
||||
|
||||
The fixtures requested by ``test_foo`` will be instantiated in the following order:
|
||||
|
||||
1. ``s1``: is the highest-scoped fixture (``session``).
|
||||
2. ``m1``: is the second highest-scoped fixture (``module``).
|
||||
3. ``tmpdir``: is a ``function``-scoped fixture, required by ``f1``; it needs to be instantiated at this point
|
||||
because it is a dependency of ``f1``.
|
||||
4. ``f1``: is the first ``function``-scoped fixture in the ``test_foo`` parameter list.
|
||||
5. ``f2``: is the last ``function``-scoped fixture in the ``test_foo`` parameter list.
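A minimal sketch that makes this order observable; the ``order`` list is an illustration device, not part of pytest:

.. code-block:: python

    import pytest

    order = []

    @pytest.fixture(scope="session")
    def s1():
        order.append("s1")

    @pytest.fixture(scope="module")
    def m1():
        order.append("m1")

    @pytest.fixture
    def f1(tmpdir):
        order.append("f1")

    @pytest.fixture
    def f2():
        order.append("f2")

    def test_order(f1, m1, f2, s1):
        # ``tmpdir`` is created just before ``f1`` (it is a dependency of ``f1``)
        # but does not record itself in ``order``.
        assert order == ["s1", "m1", "f1", "f2"]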
|
||||
|
||||
|
||||
.. _`finalization`:
|
||||
|
||||
Fixture finalization / executing teardown code
|
||||
|
@ -696,11 +740,11 @@ Let's run the tests in verbose mode and with looking at the print-output::
|
|||
test_module.py::test_1[mod1] SETUP modarg mod1
|
||||
RUN test1 with modarg mod1
|
||||
PASSED
|
||||
test_module.py::test_2[1-mod1] SETUP otherarg 1
|
||||
test_module.py::test_2[mod1-1] SETUP otherarg 1
|
||||
RUN test2 with otherarg 1 and modarg mod1
|
||||
PASSED TEARDOWN otherarg 1
|
||||
|
||||
test_module.py::test_2[2-mod1] SETUP otherarg 2
|
||||
test_module.py::test_2[mod1-2] SETUP otherarg 2
|
||||
RUN test2 with otherarg 2 and modarg mod1
|
||||
PASSED TEARDOWN otherarg 2
|
||||
|
||||
|
@ -708,11 +752,11 @@ Let's run the tests in verbose mode and with looking at the print-output::
|
|||
SETUP modarg mod2
|
||||
RUN test1 with modarg mod2
|
||||
PASSED
|
||||
test_module.py::test_2[1-mod2] SETUP otherarg 1
|
||||
test_module.py::test_2[mod2-1] SETUP otherarg 1
|
||||
RUN test2 with otherarg 1 and modarg mod2
|
||||
PASSED TEARDOWN otherarg 1
|
||||
|
||||
test_module.py::test_2[2-mod2] SETUP otherarg 2
|
||||
test_module.py::test_2[mod2-2] SETUP otherarg 2
|
||||
RUN test2 with otherarg 2 and modarg mod2
|
||||
PASSED TEARDOWN otherarg 2
|
||||
TEARDOWN modarg mod2
|
||||
|
|
|
@ -50,26 +50,10 @@ These options can also be customized through ``pytest.ini`` file:
|
|||
log_format = %(asctime)s %(levelname)s %(message)s
|
||||
log_date_format = %Y-%m-%d %H:%M:%S
|
||||
|
||||
Further it is possible to disable reporting logs on failed tests completely
|
||||
with::
|
||||
Furthermore, it is possible to completely disable reporting of captured content (stdout,
|
||||
stderr and logs) on failed tests with::
|
||||
|
||||
pytest --no-print-logs
|
||||
|
||||
Or in the ``pytest.ini`` file:
|
||||
|
||||
.. code-block:: ini
|
||||
|
||||
[pytest]
|
||||
log_print = False
|
||||
|
||||
|
||||
Shows failed tests in the normal manner as no logs were captured::
|
||||
|
||||
----------------------- Captured stdout call ----------------------
|
||||
text going to stdout
|
||||
----------------------- Captured stderr call ----------------------
|
||||
text going to stderr
|
||||
==================== 2 failed in 0.02 seconds =====================
|
||||
pytest --show-capture=no
|
||||
|
||||
|
||||
caplog fixture
|
||||
|
|
|
@ -80,6 +80,12 @@ will be loaded as well.
|
|||
|
||||
which will import the specified module as a ``pytest`` plugin.
|
||||
|
||||
.. note::
|
||||
Requiring plugins using a ``pytest_plugins`` variable in non-root
|
||||
``conftest.py`` files is deprecated. See
|
||||
:ref:`full explanation <requiring plugins in non-root conftests>`
|
||||
in the Writing plugins section.
|
||||
|
||||
.. _`findpluginname`:
|
||||
|
||||
Finding out which plugins are active
|
||||
|
|
|
@ -388,12 +388,12 @@ pytestconfig
|
|||
.. autofunction:: _pytest.fixtures.pytestconfig()
|
||||
|
||||
|
||||
record_xml_property
|
||||
record_property
|
||||
~~~~~~~~~~~~~~~~~~~
|
||||
|
||||
**Tutorial**: :ref:`record_xml_property example`.
|
||||
**Tutorial**: :ref:`record_property example`.
|
||||
|
||||
.. autofunction:: _pytest.junitxml.record_xml_property()
|
||||
.. autofunction:: _pytest.junitxml.record_property()
|
||||
|
||||
caplog
|
||||
~~~~~~
|
||||
|
|
|
@ -51,7 +51,6 @@ Running this would result in a passed test except for the last
|
|||
test_tmpdir.py:7: AssertionError
|
||||
========================= 1 failed in 0.12 seconds =========================
|
||||
|
||||
|
||||
.. _`tmpdir factory example`:
|
||||
|
||||
The 'tmpdir_factory' fixture
|
||||
|
|
|
@ -220,21 +220,26 @@ To set the name of the root test suite xml item, you can configure the ``junit_s
|
|||
[pytest]
|
||||
junit_suite_name = my_suite
|
||||
|
||||
.. _record_xml_property example:
|
||||
.. _record_property example:
|
||||
|
||||
record_xml_property
|
||||
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
|
||||
record_property
|
||||
^^^^^^^^^^^^^^^
|
||||
|
||||
.. versionadded:: 2.8
|
||||
.. versionchanged:: 3.5
|
||||
|
||||
Fixture renamed from ``record_xml_property`` to ``record_property`` as user
|
||||
properties are now available to all reporters.
|
||||
``record_xml_property`` is now deprecated.
|
||||
|
||||
If you want to log additional information for a test, you can use the
|
||||
``record_xml_property`` fixture:
|
||||
``record_property`` fixture:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
def test_function(record_xml_property):
|
||||
record_xml_property("example_key", 1)
|
||||
assert 0
|
||||
def test_function(record_property):
|
||||
record_property("example_key", 1)
|
||||
assert True
|
||||
|
||||
This will add an extra property ``example_key="1"`` to the generated
|
||||
``testcase`` tag:
|
||||
|
@ -247,13 +252,42 @@ This will add an extra property ``example_key="1"`` to the generated
|
|||
</properties>
|
||||
</testcase>
|
||||
|
||||
Alternatively, you can integrate this functionality with custom markers:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
# content of conftest.py
|
||||
|
||||
def pytest_collection_modifyitems(session, config, items):
|
||||
for item in items:
|
||||
marker = item.get_marker('test_id')
|
||||
if marker is not None:
|
||||
test_id = marker.args[0]
|
||||
item.user_properties.append(('test_id', test_id))
|
||||
|
||||
And in your tests:
|
||||
|
||||
.. code-block:: python
|
||||
|
||||
# content of test_function.py
|
||||
import pytest
|
||||
@pytest.mark.test_id(1501)
|
||||
def test_function():
|
||||
assert True
|
||||
|
||||
Will result in:
|
||||
|
||||
.. code-block:: xml
|
||||
|
||||
<testcase classname="test_function" file="test_function.py" line="0" name="test_function" time="0.0009">
|
||||
<properties>
|
||||
<property name="test_id" value="1501" />
|
||||
</properties>
|
||||
</testcase>
|
||||
|
||||
.. warning::
|
||||
|
||||
``record_xml_property`` is an experimental feature, and its interface might be replaced
|
||||
by something more powerful and general in future versions. The
|
||||
functionality per-se will be kept, however.
|
||||
|
||||
Currently it does not work when used with the ``pytest-xdist`` plugin.
|
||||
``record_property`` is an experimental feature and may change in the future.
|
||||
|
||||
Also please note that using this feature will break any schema verification.
|
||||
This might be a problem when used with some CI servers.
|
||||
|
@ -274,7 +308,7 @@ To add an additional xml attribute to a testcase element, you can use
|
|||
print('hello world')
|
||||
assert True
|
||||
|
||||
Unlike ``record_xml_property``, this will not add a new child element.
|
||||
Unlike ``record_property``, this will not add a new child element.
|
||||
Instead, this will add an attribute ``assertions="REQ-1234"`` inside the generated
|
||||
``testcase`` tag and override the default ``classname`` with ``"classname=custom_classname"``:
|
||||
|
||||
|
@ -448,7 +482,7 @@ Running it will show that ``MyPlugin`` was added and its
|
|||
hook was invoked::
|
||||
|
||||
$ python myinvoke.py
|
||||
*** test run reporting finishing
|
||||
. [100%]*** test run reporting finishing
|
||||
|
||||
|
||||
.. note::
|
||||
|
|
|
@ -257,6 +257,18 @@ application modules:
|
|||
if ``myapp.testsupport.myplugin`` also declares ``pytest_plugins``, the contents
|
||||
of the variable will also be loaded as plugins, and so on.
|
||||
|
||||
.. _`requiring plugins in non-root conftests`:
|
||||
|
||||
.. note::
|
||||
Requiring plugins using a ``pytest_plugins`` variable in non-root
|
||||
``conftest.py`` files is deprecated.
|
||||
|
||||
This is important because ``conftest.py`` files implement per-directory
|
||||
hook implementations, but once a plugin is imported, it will affect the
|
||||
entire directory tree. In order to avoid confusion, defining
|
||||
``pytest_plugins`` in any ``conftest.py`` file which is not located in the
|
||||
tests root directory is deprecated, and will raise a warning.
|
||||
|
||||
This mechanism makes it easy to share fixtures within applications or even
|
||||
external applications without the need to create external plugins using
|
||||
the ``setuptools`` entry point technique.
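A sketch of the layout affected by the deprecation above (directory names are hypothetical)::

    tests/                    # rootdir
        conftest.py           # defining pytest_plugins here is still allowed
        subdirectory/
            conftest.py       # defining pytest_plugins here is deprecated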
|
||||
|
|
5
setup.py
5
setup.py
|
@ -59,7 +59,8 @@ def main():
|
|||
'py>=1.5.0',
|
||||
'six>=1.10.0',
|
||||
'setuptools',
|
||||
'attrs>=17.2.0',
|
||||
'attrs>=17.4.0',
|
||||
'more_itertools>=4.0.0',
|
||||
]
|
||||
# if _PYTEST_SETUP_SKIP_PLUGGY_DEP is set, skip installing pluggy;
|
||||
# used by tox.ini to test with pluggy master
|
||||
|
@ -101,7 +102,7 @@ def main():
|
|||
python_requires='>=2.7, !=3.0.*, !=3.1.*, !=3.2.*, !=3.3.*',
|
||||
install_requires=install_requires,
|
||||
extras_require=extras_require,
|
||||
packages=['_pytest', '_pytest.assertion', '_pytest._code'],
|
||||
packages=['_pytest', '_pytest.assertion', '_pytest._code', '_pytest.mark'],
|
||||
py_modules=['pytest'],
|
||||
zip_safe=False,
|
||||
)
|
||||
|
|
|
@ -964,3 +964,27 @@ def test_fixture_values_leak(testdir):
|
|||
""")
|
||||
result = testdir.runpytest()
|
||||
result.stdout.fnmatch_lines(['* 2 passed *'])
|
||||
|
||||
|
||||
def test_fixture_order_respects_scope(testdir):
|
||||
"""Ensure that fixtures are created according to scope order, regression test for #2405
|
||||
"""
|
||||
testdir.makepyfile('''
|
||||
import pytest
|
||||
|
||||
data = {}
|
||||
|
||||
@pytest.fixture(scope='module')
|
||||
def clean_data():
|
||||
data.clear()
|
||||
|
||||
@pytest.fixture(autouse=True)
|
||||
def add_data():
|
||||
data.update(value=True)
|
||||
|
||||
@pytest.mark.usefixtures('clean_data')
|
||||
def test_value():
|
||||
assert data.get('value')
|
||||
''')
|
||||
result = testdir.runpytest()
|
||||
assert result.ret == 0
|
||||
|
|
|
@ -48,6 +48,15 @@ def test_pytest_setup_cfg_deprecated(testdir):
|
|||
result.stdout.fnmatch_lines(['*pytest*section in setup.cfg files is deprecated*use*tool:pytest*instead*'])
|
||||
|
||||
|
||||
def test_pytest_custom_cfg_deprecated(testdir):
|
||||
testdir.makefile('.cfg', custom='''
|
||||
[pytest]
|
||||
addopts = --verbose
|
||||
''')
|
||||
result = testdir.runpytest("-c", "custom.cfg")
|
||||
result.stdout.fnmatch_lines(['*pytest*section in custom.cfg files is deprecated*use*tool:pytest*instead*'])
|
||||
|
||||
|
||||
def test_str_args_deprecated(tmpdir, testdir):
|
||||
"""Deprecate passing strings to pytest.main(). Scheduled for removal in pytest-4.0."""
|
||||
from _pytest.main import EXIT_NOTESTSCOLLECTED
|
||||
|
@ -125,3 +134,70 @@ def test_pytest_catchlog_deprecated(testdir, plugin):
|
|||
"*pytest-*log plugin has been merged into the core*",
|
||||
"*1 passed, 1 warnings*",
|
||||
])
|
||||
|
||||
|
||||
def test_pytest_plugins_in_non_top_level_conftest_deprecated(testdir):
|
||||
from _pytest.deprecated import PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST
|
||||
subdirectory = testdir.tmpdir.join("subdirectory")
|
||||
subdirectory.mkdir()
|
||||
# create the inner conftest with makeconftest and then move it to the subdirectory
|
||||
testdir.makeconftest("""
|
||||
pytest_plugins=['capture']
|
||||
""")
|
||||
testdir.tmpdir.join("conftest.py").move(subdirectory.join("conftest.py"))
|
||||
# make the top level conftest
|
||||
testdir.makeconftest("""
|
||||
import warnings
|
||||
warnings.filterwarnings('always', category=DeprecationWarning)
|
||||
""")
|
||||
testdir.makepyfile("""
|
||||
def test_func():
|
||||
pass
|
||||
""")
|
||||
res = testdir.runpytest_subprocess()
|
||||
assert res.ret == 0
|
||||
res.stderr.fnmatch_lines('*' + str(PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST).splitlines()[0])
|
||||
|
||||
|
||||
def test_pytest_plugins_in_non_top_level_conftest_deprecated_no_top_level_conftest(testdir):
|
||||
from _pytest.deprecated import PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST
|
||||
subdirectory = testdir.tmpdir.join('subdirectory')
|
||||
subdirectory.mkdir()
|
||||
testdir.makeconftest("""
|
||||
import warnings
|
||||
warnings.filterwarnings('always', category=DeprecationWarning)
|
||||
pytest_plugins=['capture']
|
||||
""")
|
||||
testdir.tmpdir.join("conftest.py").move(subdirectory.join("conftest.py"))
|
||||
|
||||
testdir.makepyfile("""
|
||||
def test_func():
|
||||
pass
|
||||
""")
|
||||
|
||||
res = testdir.runpytest_subprocess()
|
||||
assert res.ret == 0
|
||||
res.stderr.fnmatch_lines('*' + str(PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST).splitlines()[0])
|
||||
|
||||
|
||||
def test_pytest_plugins_in_non_top_level_conftest_deprecated_no_false_positives(testdir):
|
||||
from _pytest.deprecated import PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST
|
||||
subdirectory = testdir.tmpdir.join('subdirectory')
|
||||
subdirectory.mkdir()
|
||||
testdir.makeconftest("""
|
||||
pass
|
||||
""")
|
||||
testdir.tmpdir.join("conftest.py").move(subdirectory.join("conftest.py"))
|
||||
|
||||
testdir.makeconftest("""
|
||||
import warnings
|
||||
warnings.filterwarnings('always', category=DeprecationWarning)
|
||||
pytest_plugins=['capture']
|
||||
""")
|
||||
testdir.makepyfile("""
|
||||
def test_func():
|
||||
pass
|
||||
""")
|
||||
res = testdir.runpytest_subprocess()
|
||||
assert res.ret == 0
|
||||
assert str(PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST).splitlines()[0] not in res.stderr.str()
|
||||
|
|
|
@ -1,4 +1,5 @@
|
|||
# -*- coding: utf-8 -*-
|
||||
import re
|
||||
import os
|
||||
|
||||
import six
|
||||
|
@ -161,6 +162,7 @@ def test_log_cli_enabled_disabled(testdir, enabled):
|
|||
if enabled:
|
||||
result.stdout.fnmatch_lines([
|
||||
'test_log_cli_enabled_disabled.py::test_log_cli ',
|
||||
'*-- live log call --*',
|
||||
'test_log_cli_enabled_disabled.py* CRITICAL critical message logged by test',
|
||||
'PASSED*',
|
||||
])
|
||||
|
@ -226,8 +228,20 @@ def test_log_cli_default_level_multiple_tests(testdir, request):
|
|||
|
||||
|
||||
def test_log_cli_default_level_sections(testdir, request):
|
||||
"""Check that with live logging enable we are printing the correct headers during setup/call/teardown."""
|
||||
"""Check that with live logging enable we are printing the correct headers during
|
||||
start/setup/call/teardown/finish."""
|
||||
filename = request.node.name + '.py'
|
||||
testdir.makeconftest('''
|
||||
import pytest
|
||||
import logging
|
||||
|
||||
def pytest_runtest_logstart():
|
||||
logging.warning('>>>>> START >>>>>')
|
||||
|
||||
def pytest_runtest_logfinish():
|
||||
logging.warning('<<<<< END <<<<<<<')
|
||||
''')
|
||||
|
||||
testdir.makepyfile('''
|
||||
import pytest
|
||||
import logging
|
||||
|
@ -252,6 +266,8 @@ def test_log_cli_default_level_sections(testdir, request):
|
|||
result = testdir.runpytest()
|
||||
result.stdout.fnmatch_lines([
|
||||
'{}::test_log_1 '.format(filename),
|
||||
'*-- live log start --*',
|
||||
'*WARNING* >>>>> START >>>>>*',
|
||||
'*-- live log setup --*',
|
||||
'*WARNING*log message from setup of test_log_1*',
|
||||
'*-- live log call --*',
|
||||
|
@ -259,8 +275,12 @@ def test_log_cli_default_level_sections(testdir, request):
|
|||
'PASSED *50%*',
|
||||
'*-- live log teardown --*',
|
||||
'*WARNING*log message from teardown of test_log_1*',
|
||||
'*-- live log finish --*',
|
||||
'*WARNING* <<<<< END <<<<<<<*',
|
||||
|
||||
'{}::test_log_2 '.format(filename),
|
||||
'*-- live log start --*',
|
||||
'*WARNING* >>>>> START >>>>>*',
|
||||
'*-- live log setup --*',
|
||||
'*WARNING*log message from setup of test_log_2*',
|
||||
'*-- live log call --*',
|
||||
|
@ -268,6 +288,8 @@ def test_log_cli_default_level_sections(testdir, request):
|
|||
'PASSED *100%*',
|
||||
'*-- live log teardown --*',
|
||||
'*WARNING*log message from teardown of test_log_2*',
|
||||
'*-- live log finish --*',
|
||||
'*WARNING* <<<<< END <<<<<<<*',
|
||||
'=* 2 passed in *=',
|
||||
])
|
||||
|
||||
|
@ -326,6 +348,64 @@ def test_live_logs_unknown_sections(testdir, request):
|
|||
])
|
||||
|
||||
|
||||
def test_sections_single_new_line_after_test_outcome(testdir, request):
|
||||
"""Check that only a single new line is written between log messages during
|
||||
teardown/finish."""
|
||||
filename = request.node.name + '.py'
|
||||
testdir.makeconftest('''
|
||||
import pytest
|
||||
import logging
|
||||
|
||||
def pytest_runtest_logstart():
|
||||
logging.warning('>>>>> START >>>>>')
|
||||
|
||||
def pytest_runtest_logfinish():
|
||||
logging.warning('<<<<< END <<<<<<<')
|
||||
logging.warning('<<<<< END <<<<<<<')
|
||||
''')
|
||||
|
||||
testdir.makepyfile('''
|
||||
import pytest
|
||||
import logging
|
||||
|
||||
@pytest.fixture
|
||||
def fix(request):
|
||||
logging.warning("log message from setup of {}".format(request.node.name))
|
||||
yield
|
||||
logging.warning("log message from teardown of {}".format(request.node.name))
|
||||
logging.warning("log message from teardown of {}".format(request.node.name))
|
||||
|
||||
def test_log_1(fix):
|
||||
logging.warning("log message from test_log_1")
|
||||
''')
|
||||
testdir.makeini('''
|
||||
[pytest]
|
||||
log_cli=true
|
||||
''')
|
||||
|
||||
result = testdir.runpytest()
|
||||
result.stdout.fnmatch_lines([
|
||||
'{}::test_log_1 '.format(filename),
|
||||
'*-- live log start --*',
|
||||
'*WARNING* >>>>> START >>>>>*',
|
||||
'*-- live log setup --*',
|
||||
'*WARNING*log message from setup of test_log_1*',
|
||||
'*-- live log call --*',
|
||||
'*WARNING*log message from test_log_1*',
|
||||
'PASSED *100%*',
|
||||
'*-- live log teardown --*',
|
||||
'*WARNING*log message from teardown of test_log_1*',
|
||||
'*-- live log finish --*',
|
||||
'*WARNING* <<<<< END <<<<<<<*',
|
||||
'*WARNING* <<<<< END <<<<<<<*',
|
||||
'=* 1 passed in *=',
|
||||
])
|
||||
assert re.search(r'(.+)live log teardown(.+)\n(.+)WARNING(.+)\n(.+)WARNING(.+)',
|
||||
result.stdout.str(), re.MULTILINE) is not None
|
||||
assert re.search(r'(.+)live log finish(.+)\n(.+)WARNING(.+)\n(.+)WARNING(.+)',
|
||||
result.stdout.str(), re.MULTILINE) is not None
|
||||
|
||||
|
||||
def test_log_cli_level(testdir):
|
||||
# Default log file level
|
||||
testdir.makepyfile('''
|
||||
|
@ -399,6 +479,48 @@ def test_log_cli_ini_level(testdir):
|
|||
assert result.ret == 0
|
||||
|
||||
|
||||
@pytest.mark.parametrize('cli_args', ['',
|
||||
'--log-level=WARNING',
|
||||
'--log-file-level=WARNING',
|
||||
'--log-cli-level=WARNING'])
|
||||
def test_log_cli_auto_enable(testdir, request, cli_args):
|
||||
"""Check that live logs are enabled if --log-level or --log-cli-level is passed on the CLI.
|
||||
It should not be auto-enabled if the same options are set in the INI file.
|
||||
"""
|
||||
testdir.makepyfile('''
|
||||
import pytest
|
||||
import logging
|
||||
|
||||
def test_log_1():
|
||||
logging.info("log message from test_log_1 not to be shown")
|
||||
logging.warning("log message from test_log_1")
|
||||
|
||||
''')
|
||||
testdir.makeini('''
|
||||
[pytest]
|
||||
log_level=INFO
|
||||
log_cli_level=INFO
|
||||
''')
|
||||
|
||||
result = testdir.runpytest(cli_args)
|
||||
if cli_args == '--log-cli-level=WARNING':
|
||||
result.stdout.fnmatch_lines([
|
||||
'*::test_log_1 ',
|
||||
'*-- live log call --*',
|
||||
'*WARNING*log message from test_log_1*',
|
||||
'PASSED *100%*',
|
||||
'=* 1 passed in *=',
|
||||
])
|
||||
assert 'INFO' not in result.stdout.str()
|
||||
else:
|
||||
result.stdout.fnmatch_lines([
|
||||
'*test_log_cli_auto_enable*100%*',
|
||||
'=* 1 passed in *=',
|
||||
])
|
||||
assert 'INFO' not in result.stdout.str()
|
||||
assert 'WARNING' not in result.stdout.str()
|
||||
|
||||
|
||||
def test_log_file_cli(testdir):
|
||||
# Default log file level
|
||||
testdir.makepyfile('''
|
||||
|
|
|
@ -391,3 +391,25 @@ class TestApprox(object):
|
|||
"""
|
||||
with pytest.raises(TypeError):
|
||||
op(1, approx(1, rel=1e-6, abs=1e-12))
|
||||
|
||||
def test_numpy_array_with_scalar(self):
|
||||
np = pytest.importorskip('numpy')
|
||||
|
||||
actual = np.array([1 + 1e-7, 1 - 1e-8])
|
||||
expected = 1.0
|
||||
|
||||
assert actual == approx(expected, rel=5e-7, abs=0)
|
||||
assert actual != approx(expected, rel=5e-8, abs=0)
|
||||
assert approx(expected, rel=5e-7, abs=0) == actual
|
||||
assert approx(expected, rel=5e-8, abs=0) != actual
|
||||
|
||||
def test_numpy_scalar_with_array(self):
|
||||
np = pytest.importorskip('numpy')
|
||||
|
||||
actual = 1.0
|
||||
expected = np.array([1 + 1e-7, 1 - 1e-8])
|
||||
|
||||
assert actual == approx(expected, rel=5e-7, abs=0)
|
||||
assert actual != approx(expected, rel=5e-8, abs=0)
|
||||
assert approx(expected, rel=5e-7, abs=0) == actual
|
||||
assert approx(expected, rel=5e-8, abs=0) != actual
|
||||
|
|
|
@ -3,7 +3,7 @@ from textwrap import dedent
|
|||
import _pytest._code
|
||||
import pytest
|
||||
from _pytest.pytester import get_public_names
|
||||
from _pytest.fixtures import FixtureLookupError
|
||||
from _pytest.fixtures import FixtureLookupError, FixtureRequest
|
||||
from _pytest import fixtures
|
||||
|
||||
|
||||
|
@ -2281,19 +2281,19 @@ class TestFixtureMarker(object):
|
|||
pass
|
||||
""")
|
||||
result = testdir.runpytest("-vs")
|
||||
result.stdout.fnmatch_lines("""
|
||||
test_class_ordering.py::TestClass2::test_1[1-a] PASSED
|
||||
test_class_ordering.py::TestClass2::test_1[2-a] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2[1-a] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2[2-a] PASSED
|
||||
test_class_ordering.py::TestClass2::test_1[1-b] PASSED
|
||||
test_class_ordering.py::TestClass2::test_1[2-b] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2[1-b] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2[2-b] PASSED
|
||||
test_class_ordering.py::TestClass::test_3[1-a] PASSED
|
||||
test_class_ordering.py::TestClass::test_3[2-a] PASSED
|
||||
test_class_ordering.py::TestClass::test_3[1-b] PASSED
|
||||
test_class_ordering.py::TestClass::test_3[2-b] PASSED
|
||||
result.stdout.re_match_lines(r"""
|
||||
test_class_ordering.py::TestClass2::test_1\[a-1\] PASSED
|
||||
test_class_ordering.py::TestClass2::test_1\[a-2\] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2\[a-1\] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2\[a-2\] PASSED
|
||||
test_class_ordering.py::TestClass2::test_1\[b-1\] PASSED
|
||||
test_class_ordering.py::TestClass2::test_1\[b-2\] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2\[b-1\] PASSED
|
||||
test_class_ordering.py::TestClass2::test_2\[b-2\] PASSED
|
||||
test_class_ordering.py::TestClass::test_3\[a-1\] PASSED
|
||||
test_class_ordering.py::TestClass::test_3\[a-2\] PASSED
|
||||
test_class_ordering.py::TestClass::test_3\[b-1\] PASSED
|
||||
test_class_ordering.py::TestClass::test_3\[b-2\] PASSED
|
||||
""")
|
||||
|
||||
def test_parametrize_separated_order_higher_scope_first(self, testdir):
|
||||
|
@ -3245,3 +3245,188 @@ def test_pytest_fixture_setup_and_post_finalizer_hook(testdir):
|
|||
"*TESTS finalizer hook called for my_fixture from test_func*",
|
||||
"*ROOT finalizer hook called for my_fixture from test_func*",
|
||||
])
|
||||
|
||||
|
||||
class TestScopeOrdering(object):
|
||||
"""Class of tests that ensure fixtures are ordered based on their scopes (#2405)"""
|
||||
|
||||
@pytest.mark.parametrize('use_mark', [True, False])
|
||||
def test_func_closure_module_auto(self, testdir, use_mark):
|
||||
"""Semantically identical to the example posted in #2405 when ``use_mark=True``"""
|
||||
testdir.makepyfile("""
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='module', autouse={autouse})
|
||||
def m1(): pass
|
||||
|
||||
if {use_mark}:
|
||||
pytestmark = pytest.mark.usefixtures('m1')
|
||||
|
||||
@pytest.fixture(scope='function', autouse=True)
|
||||
def f1(): pass
|
||||
|
||||
def test_func(m1):
|
||||
pass
|
||||
""".format(autouse=not use_mark, use_mark=use_mark))
|
||||
items, _ = testdir.inline_genitems()
|
||||
request = FixtureRequest(items[0])
|
||||
assert request.fixturenames == 'm1 f1'.split()
|
||||
|
||||
def test_func_closure_with_native_fixtures(self, testdir, monkeypatch):
|
||||
"""Sanity check that verifies the order returned by the closures and the actual fixture execution order:
|
||||
The execution order may differ because of fixture inter-dependencies.
|
||||
"""
|
||||
monkeypatch.setattr(pytest, 'FIXTURE_ORDER', [], raising=False)
|
||||
testdir.makepyfile("""
|
||||
import pytest
|
||||
|
||||
FIXTURE_ORDER = pytest.FIXTURE_ORDER
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def s1():
|
||||
FIXTURE_ORDER.append('s1')
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def m1():
|
||||
FIXTURE_ORDER.append('m1')
|
||||
|
||||
@pytest.fixture(scope='session')
|
||||
def my_tmpdir_factory():
|
||||
FIXTURE_ORDER.append('my_tmpdir_factory')
|
||||
|
||||
@pytest.fixture
|
||||
def my_tmpdir(my_tmpdir_factory):
|
||||
FIXTURE_ORDER.append('my_tmpdir')
|
||||
|
||||
@pytest.fixture
|
||||
def f1(my_tmpdir):
|
||||
FIXTURE_ORDER.append('f1')
|
||||
|
||||
@pytest.fixture
|
||||
def f2():
|
||||
FIXTURE_ORDER.append('f2')
|
||||
|
||||
def test_foo(f1, m1, f2, s1): pass
|
||||
""")
|
||||
items, _ = testdir.inline_genitems()
|
||||
request = FixtureRequest(items[0])
|
||||
# order of fixtures based on their scope and position in the parameter list
|
||||
assert request.fixturenames == 's1 my_tmpdir_factory m1 f1 f2 my_tmpdir'.split()
|
||||
testdir.runpytest()
|
||||
# actual fixture execution differs: dependent fixtures must be created first ("my_tmpdir")
|
||||
assert pytest.FIXTURE_ORDER == 's1 my_tmpdir_factory m1 my_tmpdir f1 f2'.split()
|
||||
|
||||
def test_func_closure_module(self, testdir):
|
||||
testdir.makepyfile("""
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='module')
|
||||
def m1(): pass
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def f1(): pass
|
||||
|
||||
def test_func(f1, m1):
|
||||
pass
|
||||
""")
|
||||
items, _ = testdir.inline_genitems()
|
||||
request = FixtureRequest(items[0])
|
||||
assert request.fixturenames == 'm1 f1'.split()
|
||||
|
||||
def test_func_closure_scopes_reordered(self, testdir):
|
||||
"""Test ensures that fixtures are ordered by scope regardless of the order of the parameters, although
|
||||
fixtures of the same scope keep the declared order
|
||||
"""
|
||||
testdir.makepyfile("""
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='session')
|
||||
def s1(): pass
|
||||
|
||||
@pytest.fixture(scope='module')
|
||||
def m1(): pass
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def f1(): pass
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def f2(): pass
|
||||
|
||||
class Test:
|
||||
|
||||
@pytest.fixture(scope='class')
|
||||
def c1(cls): pass
|
||||
|
||||
def test_func(self, f2, f1, c1, m1, s1):
|
||||
pass
|
||||
""")
|
||||
items, _ = testdir.inline_genitems()
|
||||
request = FixtureRequest(items[0])
|
||||
assert request.fixturenames == 's1 m1 c1 f2 f1'.split()
|
||||
|
||||
def test_func_closure_same_scope_closer_root_first(self, testdir):
|
||||
"""Auto-use fixtures of same scope are ordered by closer-to-root first"""
|
||||
testdir.makeconftest("""
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='module', autouse=True)
|
||||
def m_conf(): pass
|
||||
""")
|
||||
testdir.makepyfile(**{
|
||||
'sub/conftest.py': """
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='module', autouse=True)
|
||||
def m_sub(): pass
|
||||
""",
|
||||
'sub/test_func.py': """
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='module', autouse=True)
|
||||
def m_test(): pass
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def f1(): pass
|
||||
|
||||
def test_func(m_test, f1):
|
||||
pass
|
||||
"""})
|
||||
items, _ = testdir.inline_genitems()
|
||||
request = FixtureRequest(items[0])
|
||||
assert request.fixturenames == 'm_conf m_sub m_test f1'.split()
|
||||
|
||||
def test_func_closure_all_scopes_complex(self, testdir):
|
||||
"""Complex test involving all scopes and mixing autouse with normal fixtures"""
|
||||
testdir.makeconftest("""
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='session')
|
||||
def s1(): pass
|
||||
""")
|
||||
testdir.makepyfile("""
|
||||
import pytest
|
||||
|
||||
@pytest.fixture(scope='module', autouse=True)
|
||||
def m1(): pass
|
||||
|
||||
@pytest.fixture(scope='module')
|
||||
def m2(s1): pass
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def f1(): pass
|
||||
|
||||
@pytest.fixture(scope='function')
|
||||
def f2(): pass
|
||||
|
||||
class Test:
|
||||
|
||||
@pytest.fixture(scope='class', autouse=True)
|
||||
def c1(self):
|
||||
pass
|
||||
|
||||
def test_func(self, f2, f1, m2):
|
||||
pass
|
||||
""")
|
||||
items, _ = testdir.inline_genitems()
|
||||
request = FixtureRequest(items[0])
|
||||
assert request.fixturenames == 's1 m1 m2 c1 f2 f1'.split()
|
||||
|
|
|
@ -56,7 +56,7 @@ class TestNewAPI(object):
|
|||
assert result.ret == 1
|
||||
result.stdout.fnmatch_lines([
|
||||
"*could not create cache path*",
|
||||
"*1 warnings*",
|
||||
"*2 warnings*",
|
||||
])
|
||||
|
||||
def test_config_cache(self, testdir):
|
||||
|
@ -361,7 +361,7 @@ class TestLastFailed(object):
|
|||
|
||||
result = testdir.runpytest('--lf')
|
||||
result.stdout.fnmatch_lines([
|
||||
'collected 4 items',
|
||||
'collected 4 items / 2 deselected',
|
||||
'run-last-failure: rerun previous 2 failures',
|
||||
'*2 failed, 2 deselected in*',
|
||||
])
|
||||
|
@ -495,15 +495,15 @@ class TestLastFailed(object):
|
|||
# Issue #1342
|
||||
testdir.makepyfile(test_empty='')
|
||||
testdir.runpytest('-q', '--lf')
|
||||
assert not os.path.exists('.pytest_cache')
|
||||
assert not os.path.exists('.pytest_cache/v/cache/lastfailed')
|
||||
|
||||
testdir.makepyfile(test_successful='def test_success():\n assert True')
|
||||
testdir.runpytest('-q', '--lf')
|
||||
assert not os.path.exists('.pytest_cache')
|
||||
assert not os.path.exists('.pytest_cache/v/cache/lastfailed')
|
||||
|
||||
testdir.makepyfile(test_errored='def test_error():\n assert False')
|
||||
testdir.runpytest('-q', '--lf')
|
||||
assert os.path.exists('.pytest_cache')
|
||||
assert os.path.exists('.pytest_cache/v/cache/lastfailed')
|
||||
|
||||
def test_xfail_not_considered_failure(self, testdir):
|
||||
testdir.makepyfile('''
|
||||
|
@@ -603,3 +603,142 @@ class TestLastFailed(object):
         result = testdir.runpytest('--last-failed')
         result.stdout.fnmatch_lines('*4 passed*')
         assert self.get_cached_last_failed(testdir) == []
+
+    def test_lastfailed_no_failures_behavior_all_passed(self, testdir):
+        testdir.makepyfile("""
+            def test_1():
+                assert True
+            def test_2():
+                assert True
+        """)
+        result = testdir.runpytest()
+        result.stdout.fnmatch_lines(["*2 passed*"])
+        result = testdir.runpytest("--lf")
+        result.stdout.fnmatch_lines(["*2 passed*"])
+        result = testdir.runpytest("--lf", "--lfnf", "all")
+        result.stdout.fnmatch_lines(["*2 passed*"])
+        result = testdir.runpytest("--lf", "--lfnf", "none")
+        result.stdout.fnmatch_lines(["*2 desel*"])
+
+    def test_lastfailed_no_failures_behavior_empty_cache(self, testdir):
+        testdir.makepyfile("""
+            def test_1():
+                assert True
+            def test_2():
+                assert False
+        """)
+        result = testdir.runpytest("--lf", "--cache-clear")
+        result.stdout.fnmatch_lines(["*1 failed*1 passed*"])
+        result = testdir.runpytest("--lf", "--cache-clear", "--lfnf", "all")
+        result.stdout.fnmatch_lines(["*1 failed*1 passed*"])
+        result = testdir.runpytest("--lf", "--cache-clear", "--lfnf", "none")
+        result.stdout.fnmatch_lines(["*2 desel*"])
+
+
+class TestNewFirst(object):
+    def test_newfirst_usecase(self, testdir):
+        testdir.makepyfile(**{
+            'test_1/test_1.py': '''
+                def test_1(): assert 1
+                def test_2(): assert 1
+                def test_3(): assert 1
+            ''',
+            'test_2/test_2.py': '''
+                def test_1(): assert 1
+                def test_2(): assert 1
+                def test_3(): assert 1
+            '''
+        })
+
+        testdir.tmpdir.join('test_1/test_1.py').setmtime(1)
+
+        result = testdir.runpytest("-v")
+        result.stdout.fnmatch_lines([
+            "*test_1/test_1.py::test_1 PASSED*",
+            "*test_1/test_1.py::test_2 PASSED*",
+            "*test_1/test_1.py::test_3 PASSED*",
+            "*test_2/test_2.py::test_1 PASSED*",
+            "*test_2/test_2.py::test_2 PASSED*",
+            "*test_2/test_2.py::test_3 PASSED*",
+        ])
+
+        result = testdir.runpytest("-v", "--nf")
+
+        result.stdout.fnmatch_lines([
+            "*test_2/test_2.py::test_1 PASSED*",
+            "*test_2/test_2.py::test_2 PASSED*",
+            "*test_2/test_2.py::test_3 PASSED*",
+            "*test_1/test_1.py::test_1 PASSED*",
+            "*test_1/test_1.py::test_2 PASSED*",
+            "*test_1/test_1.py::test_3 PASSED*",
+        ])
+
+        testdir.tmpdir.join("test_1/test_1.py").write(
+            "def test_1(): assert 1\n"
+            "def test_2(): assert 1\n"
+            "def test_3(): assert 1\n"
+            "def test_4(): assert 1\n"
+        )
+        testdir.tmpdir.join('test_1/test_1.py').setmtime(1)
+
+        result = testdir.runpytest("-v", "--nf")
+
+        result.stdout.fnmatch_lines([
+            "*test_1/test_1.py::test_4 PASSED*",
+            "*test_2/test_2.py::test_1 PASSED*",
+            "*test_2/test_2.py::test_2 PASSED*",
+            "*test_2/test_2.py::test_3 PASSED*",
+            "*test_1/test_1.py::test_1 PASSED*",
+            "*test_1/test_1.py::test_2 PASSED*",
+            "*test_1/test_1.py::test_3 PASSED*",
+        ])
+
+    def test_newfirst_parametrize(self, testdir):
+        testdir.makepyfile(**{
+            'test_1/test_1.py': '''
+                import pytest
+                @pytest.mark.parametrize('num', [1, 2])
+                def test_1(num): assert num
+            ''',
+            'test_2/test_2.py': '''
+                import pytest
+                @pytest.mark.parametrize('num', [1, 2])
+                def test_1(num): assert num
+            '''
+        })
+
+        testdir.tmpdir.join('test_1/test_1.py').setmtime(1)
+
+        result = testdir.runpytest("-v")
+        result.stdout.fnmatch_lines([
+            "*test_1/test_1.py::test_1[1*",
+            "*test_1/test_1.py::test_1[2*",
+            "*test_2/test_2.py::test_1[1*",
+            "*test_2/test_2.py::test_1[2*"
+        ])
+
+        result = testdir.runpytest("-v", "--nf")
+
+        result.stdout.fnmatch_lines([
+            "*test_2/test_2.py::test_1[1*",
+            "*test_2/test_2.py::test_1[2*",
+            "*test_1/test_1.py::test_1[1*",
+            "*test_1/test_1.py::test_1[2*",
+        ])
+
+        testdir.tmpdir.join("test_1/test_1.py").write(
+            "import pytest\n"
+            "@pytest.mark.parametrize('num', [1, 2, 3])\n"
+            "def test_1(num): assert num\n"
+        )
+        testdir.tmpdir.join('test_1/test_1.py').setmtime(1)
+
+        result = testdir.runpytest("-v", "--nf")
+
+        result.stdout.fnmatch_lines([
+            "*test_1/test_1.py::test_1[3*",
+            "*test_2/test_2.py::test_1[1*",
+            "*test_2/test_2.py::test_1[2*",
+            "*test_1/test_1.py::test_1[1*",
+            "*test_1/test_1.py::test_1[2*",
+        ])
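The two features exercised above are driven from the command line. A brief usage sketch (flag spellings as introduced in this release; ``--lfnf`` is the short form of ``--last-failed-no-failures``)::

    import pytest

    # run new tests (by file modification time) before the rest of the suite
    pytest.main(["-v", "--nf"])

    # with '--lf', run nothing instead of everything when no failures
    # were recorded in the previous run
    pytest.main(["--lf", "--last-failed-no-failures", "none"])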
@@ -756,6 +756,27 @@ class TestDoctestSkips(object):
         reprec = testdir.inline_run("--doctest-modules")
         reprec.assertoutcome(passed=0, skipped=0)

+    def test_continue_on_failure(self, testdir):
+        testdir.maketxtfile(test_something="""
+            >>> i = 5
+            >>> def foo():
+            ...     raise ValueError('error1')
+            >>> foo()
+            >>> i
+            >>> i + 2
+            7
+            >>> i + 1
+            """)
+        result = testdir.runpytest("--doctest-modules", "--doctest-continue-on-failure")
+        result.assert_outcomes(passed=0, failed=1)
+        # The lines that contain the failure are 4, 5, and 8.  The first one
+        # is a stack trace and the other two are mismatches.
+        result.stdout.fnmatch_lines([
+            "*4: UnexpectedException*",
+            "*5: DocTestFailure*",
+            "*8: DocTestFailure*",
+        ])
+

 class TestDoctestAutoUseFixtures(object):
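This new option makes doctest runs report every failing example in a file rather than aborting at the first failure. A usage sketch::

    import pytest

    # collect all doctest failures per file instead of stopping early
    pytest.main(["--doctest-modules", "--doctest-continue-on-failure"])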
@@ -328,23 +328,28 @@ class TestPython(object):
         fnode.assert_attr(message="internal error")
         assert "Division" in fnode.toxml()

-    def test_failure_function(self, testdir):
+    @pytest.mark.parametrize('junit_logging', ['no', 'system-out', 'system-err'])
+    def test_failure_function(self, testdir, junit_logging):
         testdir.makepyfile("""
+            import logging
             import sys
+
             def test_fail():
                 print ("hello-stdout")
                 sys.stderr.write("hello-stderr\\n")
+                logging.info('info msg')
+                logging.warning('warning msg')
                 raise ValueError(42)
         """)

-        result, dom = runandparse(testdir)
+        result, dom = runandparse(testdir, '-o', 'junit_logging=%s' % junit_logging)
         assert result.ret
         node = dom.find_first_by_tag("testsuite")
         node.assert_attr(failures=1, tests=1)
         tnode = node.find_first_by_tag("testcase")
         tnode.assert_attr(
             file="test_failure_function.py",
-            line="1",
+            line="3",
             classname="test_failure_function",
             name="test_fail")
         fnode = tnode.find_first_by_tag("failure")
@@ -353,9 +358,21 @@ class TestPython(object):
         systemout = fnode.next_siebling
         assert systemout.tag == "system-out"
         assert "hello-stdout" in systemout.toxml()
+        assert "info msg" not in systemout.toxml()
         systemerr = systemout.next_siebling
         assert systemerr.tag == "system-err"
         assert "hello-stderr" in systemerr.toxml()
+        assert "info msg" not in systemerr.toxml()
+
+        if junit_logging == 'system-out':
+            assert "warning msg" in systemout.toxml()
+            assert "warning msg" not in systemerr.toxml()
+        elif junit_logging == 'system-err':
+            assert "warning msg" not in systemout.toxml()
+            assert "warning msg" in systemerr.toxml()
+        elif junit_logging == 'no':
+            assert "warning msg" not in systemout.toxml()
+            assert "warning msg" not in systemerr.toxml()

     def test_failure_verbose_message(self, testdir):
         testdir.makepyfile("""
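The parametrized ``junit_logging`` option above controls where captured logging calls land in the JUnit XML. A configuration sketch (values as tested above; the report filename is an assumption)::

    import pytest

    # route captured log records into <system-out> of the XML report;
    # 'system-err' and 'no' are the other accepted values
    pytest.main(["--junitxml=report.xml", "-o", "junit_logging=system-out"])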
@@ -846,10 +863,10 @@ def test_record_property(testdir):
         import pytest

         @pytest.fixture
-        def other(record_xml_property):
-            record_xml_property("bar", 1)
-        def test_record(record_xml_property, other):
-            record_xml_property("foo", "<1");
+        def other(record_property):
+            record_property("bar", 1)
+        def test_record(record_property, other):
+            record_property("foo", "<1");
     """)
     result, dom = runandparse(testdir, '-rw')
     node = dom.find_first_by_tag("testsuite")
@@ -860,15 +877,15 @@ def test_record_property(testdir):
     pnodes[1].assert_attr(name="foo", value="<1")
     result.stdout.fnmatch_lines([
         'test_record_property.py::test_record',
-        '*record_xml_property*experimental*',
+        '*record_property*experimental*',
     ])


 def test_record_property_same_name(testdir):
     testdir.makepyfile("""
-        def test_record_with_same_name(record_xml_property):
-            record_xml_property("foo", "bar")
-            record_xml_property("foo", "baz")
+        def test_record_with_same_name(record_property):
+            record_property("foo", "bar")
+            record_property("foo", "baz")
     """)
     result, dom = runandparse(testdir, '-rw')
     node = dom.find_first_by_tag("testsuite")
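``record_xml_property`` is renamed to the reporter-agnostic ``record_property`` in this release; the old name is deprecated. A minimal usage sketch::

    # the recorded key/value pairs become <property> entries in the
    # JUnit XML report for this test
    def test_build_metadata(record_property):
        record_property("example_key", 1)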
@@ -187,6 +187,42 @@ class TestPDB(object):
         assert "captured stderr" not in output
         self.flush(child)

+    @pytest.mark.parametrize('showcapture', ['all', 'no', 'log'])
+    def test_pdb_print_captured_logs(self, testdir, showcapture):
+        p1 = testdir.makepyfile("""
+            def test_1():
+                import logging
+                logging.warn("get " + "rekt")
+                assert False
+        """)
+        child = testdir.spawn_pytest("--show-capture=%s --pdb %s" % (showcapture, p1))
+        if showcapture in ('all', 'log'):
+            child.expect("captured log")
+            child.expect("get rekt")
+        child.expect("(Pdb)")
+        child.sendeof()
+        rest = child.read().decode("utf8")
+        assert "1 failed" in rest
+        self.flush(child)
+
+    def test_pdb_print_captured_logs_nologging(self, testdir):
+        p1 = testdir.makepyfile("""
+            def test_1():
+                import logging
+                logging.warn("get " + "rekt")
+                assert False
+        """)
+        child = testdir.spawn_pytest("--show-capture=all --pdb "
+                                     "-p no:logging %s" % p1)
+        child.expect("get rekt")
+        output = child.before.decode("utf8")
+        assert "captured log" not in output
+        child.expect("(Pdb)")
+        child.sendeof()
+        rest = child.read().decode("utf8")
+        assert "1 failed" in rest
+        self.flush(child)
+
     def test_pdb_interaction_exception(self, testdir):
         p1 = testdir.makepyfile("""
             import pytest
@@ -13,7 +13,7 @@ def test_generic_path(testdir):
     from _pytest.main import Session
     config = testdir.parseconfig()
     session = Session(config)
-    p1 = Node('a', config=config, session=session)
+    p1 = Node('a', config=config, session=session, nodeid='a')
     # assert p1.fspath is None
     p2 = Node('B', parent=p1)
     p3 = Node('()', parent=p2)
@@ -1,4 +1,5 @@
 from __future__ import absolute_import, division, print_function
+
 import pytest

 from _pytest.main import EXIT_NOTESTSCOLLECTED
@@ -239,6 +240,20 @@ def test_exclude(testdir):
     result.stdout.fnmatch_lines(["*1 passed*"])


+def test_deselect(testdir):
+    testdir.makepyfile(test_a="""
+        import pytest
+        def test_a1(): pass
+        @pytest.mark.parametrize('b', range(3))
+        def test_a2(b): pass
+    """)
+    result = testdir.runpytest("-v", "--deselect=test_a.py::test_a2[1]", "--deselect=test_a.py::test_a2[2]")
+    assert result.ret == 0
+    result.stdout.fnmatch_lines(["*2 passed, 2 deselected*"])
+    for line in result.stdout.lines:
+        assert not line.startswith(('test_a.py::test_a2[1]', 'test_a.py::test_a2[2]'))
+
+
 def test_sessionfinish_with_start(testdir):
     testdir.makeconftest("""
         import os
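The new ``--deselect`` option removes individual items by node id during collection. A usage sketch (node ids taken from the test above)::

    import pytest

    # drop two parametrized variants while keeping the rest of the module
    pytest.main(["-v",
                 "--deselect", "test_a.py::test_a2[1]",
                 "--deselect", "test_a.py::test_a2[2]"])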
@@ -253,3 +268,32 @@ def test_sessionfinish_with_start(testdir):
     """)
     res = testdir.runpytest("--collect-only")
     assert res.ret == EXIT_NOTESTSCOLLECTED
+
+
+@pytest.mark.parametrize("path", ["root", "{relative}/root", "{environment}/root"])
+def test_rootdir_option_arg(testdir, monkeypatch, path):
+    monkeypatch.setenv('PY_ROOTDIR_PATH', str(testdir.tmpdir))
+    path = path.format(relative=str(testdir.tmpdir),
+                       environment='$PY_ROOTDIR_PATH')
+
+    rootdir = testdir.mkdir("root")
+    rootdir.mkdir("tests")
+    testdir.makepyfile("""
+        import os
+        def test_one():
+            assert 1
+    """)
+
+    result = testdir.runpytest("--rootdir={}".format(path))
+    result.stdout.fnmatch_lines(['*rootdir: {}/root, inifile:*'.format(testdir.tmpdir), "*1 passed*"])
+
+
+def test_rootdir_wrong_option_arg(testdir):
+    testdir.makepyfile("""
+        import os
+        def test_one():
+            assert 1
+    """)
+
+    result = testdir.runpytest("--rootdir=wrong_dir")
+    result.stderr.fnmatch_lines(["*Directory *wrong_dir* not found. Check your '--rootdir' option.*"])
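As the tests above show, ``--rootdir`` accepts absolute paths, paths relative to the invocation directory, and values containing environment variables. A usage sketch::

    import pytest

    # pin the rootdir explicitly instead of relying on auto-discovery;
    # '$PY_ROOTDIR_PATH/root' would expand from the environment as well
    pytest.main(["--rootdir=root", "-q"])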
@@ -156,6 +156,21 @@ class TestXFail(object):
         assert callreport.passed
         assert callreport.wasxfail == "this is an xfail"

+    def test_xfail_using_platform(self, testdir):
+        """
+        Verify that platform can be used with xfail statements.
+        """
+        item = testdir.getitem("""
+            import pytest
+            @pytest.mark.xfail("platform.platform() == platform.platform()")
+            def test_func():
+                assert 0
+        """)
+        reports = runtestprotocol(item, log=False)
+        assert len(reports) == 3
+        callreport = reports[1]
+        assert callreport.wasxfail
+
     def test_xfail_xpassed_strict(self, testdir):
         item = testdir.getitem("""
             import pytest
@@ -612,6 +627,16 @@ class TestSkipif(object):
         ])
         assert result.ret == 0

+    def test_skipif_using_platform(self, testdir):
+        item = testdir.getitem("""
+            import pytest
+            @pytest.mark.skipif("platform.platform() == platform.platform()")
+            def test_func():
+                pass
+        """)
+        pytest.raises(pytest.skip.Exception, lambda:
+                      pytest_runtest_setup(item))
+
     @pytest.mark.parametrize('marker, msg1, msg2', [
         ('skipif', 'SKIP', 'skipped'),
         ('xfail', 'XPASS', 'xpassed'),
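Both hunks above verify that the ``platform`` module is now available inside skipif/xfail condition strings, alongside ``os`` and ``sys``. A usage sketch (hypothetical test, not part of this diff)::

    import pytest

    # no import is needed inside the condition string
    @pytest.mark.skipif("platform.system() == 'Windows'", reason="POSIX only")
    def test_posix_only():
        pass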
@@ -1065,3 +1090,18 @@ def test_mark_xfail_item(testdir):
     assert not failed
     xfailed = [r for r in skipped if hasattr(r, 'wasxfail')]
     assert xfailed
+
+
+def test_summary_list_after_errors(testdir):
+    """Ensure the list of errors/fails/xfails/skips appears after tracebacks in terminal reporting."""
+    testdir.makepyfile("""
+        import pytest
+        def test_fail():
+            assert 0
+    """)
+    result = testdir.runpytest('-ra')
+    result.stdout.fnmatch_lines([
+        '=* FAILURES *=',
+        '*= short test summary info =*',
+        'FAIL test_summary_list_after_errors.py::test_fail',
+    ])
@@ -32,16 +32,19 @@ class Option(object):
         return values


-def pytest_generate_tests(metafunc):
-    if "option" in metafunc.fixturenames:
-        metafunc.addcall(id="default",
-                         funcargs={'option': Option(verbose=False)})
-        metafunc.addcall(id="verbose",
-                         funcargs={'option': Option(verbose=True)})
-        metafunc.addcall(id="quiet",
-                         funcargs={'option': Option(verbose=-1)})
-        metafunc.addcall(id="fulltrace",
-                         funcargs={'option': Option(fulltrace=True)})
+@pytest.fixture(params=[
+    Option(verbose=False),
+    Option(verbose=True),
+    Option(verbose=-1),
+    Option(fulltrace=True),
+], ids=[
+    "default",
+    "verbose",
+    "quiet",
+    "fulltrace",
+])
+def option(request):
+    return request.param


 @pytest.mark.parametrize('input,expected', [
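This hunk replaces the old ``metafunc.addcall`` calls with a parametrized fixture. An equivalent pattern derives the ids from the params with a callable instead of listing them (hypothetical fixture, not part of this diff)::

    import pytest

    @pytest.fixture(params=[0, 1, 2], ids=lambda level: "verbosity=%d" % level)
    def verbosity(request):
        return request.param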
@@ -431,11 +434,36 @@ class TestTerminalFunctional(object):
         )
         result = testdir.runpytest("-k", "test_two:", testpath)
         result.stdout.fnmatch_lines([
+            "collected 3 items / 1 deselected",
             "*test_deselected.py ..*",
             "=* 1 test*deselected *=",
         ])
         assert result.ret == 0

+    def test_show_deselected_items_using_markexpr_before_test_execution(
+            self, testdir):
+        testdir.makepyfile("""
+            import pytest
+
+            @pytest.mark.foo
+            def test_foobar():
+                pass
+
+            @pytest.mark.bar
+            def test_bar():
+                pass
+
+            def test_pass():
+                pass
+        """)
+        result = testdir.runpytest('-m', 'not foo')
+        result.stdout.fnmatch_lines([
+            "collected 3 items / 1 deselected",
+            "*test_show_des*.py ..*",
+            "*= 2 passed, 1 deselected in * =*",
+        ])
+        assert "= 1 deselected =" not in result.stdout.str()
+        assert result.ret == 0
+
     def test_no_skip_summary_if_failure(self, testdir):
         testdir.makepyfile("""
             import pytest
@@ -657,10 +685,12 @@ def test_color_yes_collection_on_non_atty(testdir, verbose):


 def test_getreportopt():
-    class config(object):
-        class option(object):
+    class Config(object):
+        class Option(object):
             reportchars = ""
             disable_warnings = True
+        option = Option()
+    config = Config()

     config.option.reportchars = "sf"
     assert getreportopt(config) == "sf"
@@ -823,6 +853,51 @@ def pytest_report_header(config, startdir):
             str(testdir.tmpdir),
         ])

+    def test_show_capture(self, testdir):
+        testdir.makepyfile("""
+            import sys
+            import logging
+            def test_one():
+                sys.stdout.write('!This is stdout!')
+                sys.stderr.write('!This is stderr!')
+                logging.warning('!This is a warning log msg!')
+                assert False, 'Something failed'
+        """)
+
+        result = testdir.runpytest("--tb=short")
+        result.stdout.fnmatch_lines(["!This is stdout!",
+                                     "!This is stderr!",
+                                     "*WARNING*!This is a warning log msg!"])
+
+        result = testdir.runpytest("--show-capture=all", "--tb=short")
+        result.stdout.fnmatch_lines(["!This is stdout!",
+                                     "!This is stderr!",
+                                     "*WARNING*!This is a warning log msg!"])
+
+        stdout = testdir.runpytest(
+            "--show-capture=stdout", "--tb=short").stdout.str()
+        assert "!This is stderr!" not in stdout
+        assert "!This is stdout!" in stdout
+        assert "!This is a warning log msg!" not in stdout
+
+        stdout = testdir.runpytest(
+            "--show-capture=stderr", "--tb=short").stdout.str()
+        assert "!This is stdout!" not in stdout
+        assert "!This is stderr!" in stdout
+        assert "!This is a warning log msg!" not in stdout
+
+        stdout = testdir.runpytest(
+            "--show-capture=log", "--tb=short").stdout.str()
+        assert "!This is stdout!" not in stdout
+        assert "!This is stderr!" not in stdout
+        assert "!This is a warning log msg!" in stdout
+
+        stdout = testdir.runpytest(
+            "--show-capture=no", "--tb=short").stdout.str()
+        assert "!This is stdout!" not in stdout
+        assert "!This is stderr!" not in stdout
+        assert "!This is a warning log msg!" not in stdout
+

 @pytest.mark.xfail("not hasattr(os, 'dup')")
 def test_fdopen_kept_alive_issue124(testdir):
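The matrix above pins down the per-stream semantics of ``--show-capture``. A usage sketch::

    import pytest

    # print only the captured log calls of failing tests, suppressing
    # their captured stdout and stderr sections
    pytest.main(["--show-capture=log", "--tb=short"])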