Compare commits
132 Commits
| SHA1 |
|---|
| a056b41070 |
| 6d26c44895 |
| 1644cd2da5 |
| 98135a3d30 |
| 307a41339c |
| 72ebd74715 |
| bfa53811d3 |
| 0fa77d58c4 |
| fa80b8ad17 |
| 9cdb6fc724 |
| cd8e69e33c |
| 7b87f7b6b5 |
| dd0da4643a |
| 7766526992 |
| 5c3d692008 |
| bdf9147ad4 |
| ad2ac256de |
| a4466342ae |
| 0d7af592c0 |
| 66ffc5e0f8 |
| 320137a4aa |
| 0278dc9b6f |
| 7d9297e929 |
| 9e03ea8215 |
| 60f5b15f20 |
| e67047d629 |
| 10edfa65dc |
| daec4c70b8 |
| dbfbc2b222 |
| 426907eafb |
| 4f0879ff9b |
| bec6ee5c29 |
| 23fa4cec61 |
| 4b9dbd3920 |
| 98c6ced46e |
| cb485e5af4 |
| 817b175870 |
| bd320951e6 |
| 0cfd873abe |
| d30ad3f5ce |
| 5dbf4fc0c2 |
| e3a945a0b5 |
| a5c075c4e2 |
| c0dd7c5975 |
| 4031588811 |
| 1dc2a45cb2 |
| 40b172ca5a |
| 94031f9cef |
| a6783cd6f3 |
| 438d85b5ad |
| 90b6ccd321 |
| db778fd456 |
| 08f3a0791d |
| 663f824fc4 |
| 2700a94d49 |
| e31f40c2d0 |
| fc073cb81c |
| 2e90aaf7af |
| 238b890d9b |
| 49119e31bf |
| bb5f1e8173 |
| 05fbd490da |
| 5322f057a0 |
| 086cc03f05 |
| d67514b657 |
| a467fbea0d |
| 6686c67a41 |
| 73f36fc8b7 |
| bd8a2cc18c |
| 6d1b7e94d1 |
| 9eff939b02 |
| 0a8b27ff49 |
| 72752165df |
| 9b21d3f206 |
| dde0a81677 |
| 31576fac61 |
| 82846777a7 |
| 7f49e0fddc |
| eda8b02a8d |
| 1fd1617427 |
| 97252a8b66 |
| 4a1cc792c9 |
| 581b3a110c |
| e118682db1 |
| 4eeb1c4f31 |
| e2c4730e17 |
| c3e844e561 |
| fde947e1a8 |
| ce0af892aa |
| 3f389238f8 |
| 1faf95273c |
| ba5d4ae42f |
| d18124f5ed |
| 846cf781a1 |
| 85dd51ccc8 |
| ded88700a3 |
| a9d1f40c29 |
| e2d19aab39 |
| 7210e443ee |
| f674c57d1a |
| 9dec27371d |
| 75328b66e6 |
| cf9d345382 |
| 0d8392bc45 |
| 47d2d20d81 |
| b0a5740898 |
| bc8c4b3ebd |
| 2eebe6c677 |
| 8c326c5e66 |
| 8e9034f074 |
| 612fb96d02 |
| 49d067d72e |
| 8e1301b6d7 |
| 8ac5af2896 |
| 8550ea0728 |
| 6e619e0a70 |
| 5b2b71bfd4 |
| d92322a574 |
| 7e793b9419 |
| d81b703f10 |
| 1265cb9952 |
| 124e58e42d |
| 2697b63bcd |
| ee5b836e27 |
| a4c17dfb19 |
| 00c0d62c9b |
| a5d4c20905 |
| 0335c6d750 |
| 8b6e42317b |
| 56e6ae567c |
| 33b663e03d |
| 4bfbe7ec22 |
**.hgtags** (1 line changed)

```diff
@@ -62,3 +62,4 @@ b93ac0cdae02effaa3c136a681cc45bba757fe46 1.4.14
 0000000000000000000000000000000000000000 1.4.14
 af860de70cc3f157ac34ca1d4bf557a057bff775 2.4.0
 8828c924acae0b4cad2e2cb92943d51da7cb744a 2.4.1
+8d051f89184bfa3033f5e59819dff9f32a612941 2.4.2
```
**AUTHORS** (1 line changed)

```diff
@@ -35,3 +35,4 @@ Brian Okken
 Katarzyna Jachim
 Christian Theunert
 Anthon van der Neut
+Mark Abramowitz
```
**CHANGELOG** (136 lines changed)

```diff
@@ -1,3 +1,139 @@
+2.5.0
+-----------------------------------
+
+- dropped python2.5 from automated release testing of pytest itself
+  which means it's probably going to break soon (but still works
+  with this release we believe).
+
+- simplified and fixed implementation for calling finalizers when
+  parametrized fixtures or function arguments are involved. finalization
+  is now performed lazily at setup time instead of in the "teardown phase".
+  While this might sound odd at first, it helps to ensure that we are
+  correctly handling setup/teardown even in complex code. User-level code
+  should not be affected unless it's implementing the pytest_runtest_teardown
+  hook and expecting certain fixture instances are torn down within (very
+  unlikely and would have been unreliable anyway).
+
+- PR90: add --color=yes|no|auto option to force terminal coloring
+  mode ("auto" is default). Thanks Marc Abramowitz.
+
+- fix issue319 - correctly show unicode in assertion errors. Many
+  thanks to Floris Bruynooghe for the complete PR. Also means
+  we depend on py>=1.4.19 now.
+
+- fix issue396 - correctly sort and finalize class-scoped parametrized
+  tests independently from number of methods on the class.
+
+- refix issue323 in a better way -- parametrization should now never
+  cause Runtime Recursion errors because the underlying algorithm
+  for re-ordering tests per-scope/per-fixture is not recursive
+  anymore (it was tail-call recursive before which could lead
+  to problems for more than >966 non-function scoped parameters).
+
+- fix issue290 - there is preliminary support now for parametrizing
+  with repeated same values (sometimes useful to to test if calling
+  a second time works as with the first time).
+
+- close issue240 - document precisely how pytest module importing
+  works, discuss the two common test directory layouts, and how it
+  interacts with PEP420-namespace packages.
+
+- fix issue246 fix finalizer order to be LIFO on independent fixtures
+  depending on a parametrized higher-than-function scoped fixture.
+  (was quite some effort so please bear with the complexity of this sentence :)
+  Thanks Ralph Schmitt for the precise failure example.
+
+- fix issue244 by implementing special index for parameters to only use
+  indices for paramentrized test ids
+
+- fix issue287 by running all finalizers but saving the exception
+  from the first failing finalizer and re-raising it so teardown will
+  still have failed. We reraise the first failing exception because
+  it might be the cause for other finalizers to fail.
+
+- fix ordering when mock.patch or other standard decorator-wrappings
+  are used with test methods. This fixues issue346 and should
+  help with random "xdist" collection failures. Thanks to
+  Ronny Pfannschmidt and Donald Stufft for helping to isolate it.
+
+- fix issue357 - special case "-k" expressions to allow for
+  filtering with simple strings that are not valid python expressions.
+  Examples: "-k 1.3" matches all tests parametrized with 1.3.
+  "-k None" filters all tests that have "None" in their name
+  and conversely "-k 'not None'".
+  Previously these examples would raise syntax errors.
+
+- fix issue384 by removing the trial support code
+  since the unittest compat enhancements allow
+  trial to handle it on its own
+
+- don't hide an ImportError when importing a plugin produces one.
+  fixes issue375.
+
+- fix issue275 - allow usefixtures and autouse fixtures
+  for running doctest text files.
+
+- fix issue380 by making --resultlog only rely on longrepr instead
+  of the "reprcrash" attribute which only exists sometimes.
+
+- address issue122: allow @pytest.fixture(params=iterator) by exploding
+  into a list early on.
+
+- fix pexpect-3.0 compatibility for pytest's own tests.
+  (fixes issue386)
+
+- allow nested parametrize-value markers, thanks James Lan for the PR.
+
+- fix unicode handling with new monkeypatch.setattr(import_path, value)
+  API. Thanks Rob Dennis. Fixes issue371.
+
+- fix unicode handling with junitxml, fixes issue368.
+
+- In assertion rewriting mode on Python 2, fix the detection of coding
+  cookies. See issue #330.
+
+- make "--runxfail" turn imperative pytest.xfail calls into no ops
+  (it already did neutralize pytest.mark.xfail markers)
+
+- refine pytest / pkg_resources interactions: The AssertionRewritingHook
+  PEP302 compliant loader now registers itself with setuptools/pkg_resources
+  properly so that the pkg_resources.resource_stream method works properly.
+  Fixes issue366. Thanks for the investigations and full PR to Jason R. Coombs.
+
+- pytestconfig fixture is now session-scoped as it is the same object during the
+  whole test run. Fixes issue370.
+
+- avoid one surprising case of marker malfunction/confusion::
+
+      @pytest.mark.some(lambda arg: ...)
+      def test_function():
+
+  would not work correctly because pytest assumes @pytest.mark.some
+  gets a function to be decorated already. We now at least detect if this
+  arg is an lambda and thus the example will work. Thanks Alex Gaynor
+  for bringing it up.
+
+- xfail a test on pypy that checks wrong encoding/ascii (pypy does
+  not error out). fixes issue385.
+
+- internally make varnames() deal with classes's __init__,
+  although it's not needed by pytest itself atm. Also
+  fix caching. Fixes issue376.
+
+- fix issue221 - handle importing of namespace-package with no
+  __init__.py properly.
+
+- refactor internal FixtureRequest handling to avoid monkeypatching.
+  One of the positive user-facing effects is that the "request" object
+  can now be used in closures.
+
+- fixed version comparison in pytest.importskip(modname, minverstring)
+
+- fix issue377 by clarifying in the nose-compat docs that pytest
+  does not duplicate the unittest-API into the "plain" namespace.
+
+- fix verbose reporting for @mock'd test functions
+
 Changes between 2.4.1 and 2.4.2
 -----------------------------------
 
```
```diff
@@ -1,6 +1,10 @@
 
 Documentation: http://pytest.org/latest/
 
+Changelog: http://pytest.org/latest/changelog.html
+
+Issues: https://bitbucket.org/hpk42/pytest/issues?status=open
+
 The ``py.test`` testing tool makes it easy to write small tests, yet
 scales to support complex functional testing. It provides
```
```diff
@@ -1,2 +1,2 @@
 #
-__version__ = '2.4.2'
+__version__ = '2.5.0'
```
```diff
@@ -34,7 +34,7 @@ def pytest_configure(config):
         mode = "plain"
     if mode == "rewrite":
         try:
-            import ast
+            import ast  # noqa
         except ImportError:
             mode = "reinterp"
         else:
@@ -48,10 +48,10 @@ def pytest_configure(config):
     m = monkeypatch()
     config._cleanup.append(m.undo)
     m.setattr(py.builtin.builtins, 'AssertionError',
-              reinterpret.AssertionError)
+              reinterpret.AssertionError)  # noqa
     hook = None
     if mode == "rewrite":
-        hook = rewrite.AssertionRewritingHook()
+        hook = rewrite.AssertionRewritingHook()  # noqa
         sys.meta_path.insert(0, hook)
     warn_about_missing_assertion(mode)
     config._assertstate = AssertionState(config, mode)
@@ -78,10 +78,13 @@ def pytest_runtest_setup(item):
 
         for new_expl in hook_result:
             if new_expl:
-                # Don't include pageloads of data unless we are very verbose (-vv)
-                if len(''.join(new_expl[1:])) > 80*8 and item.config.option.verbose < 2:
-                    new_expl[1:] = ['Detailed information truncated, use "-vv" to see']
-                res = '\n~'.join(new_expl)
+                # Don't include pageloads of data unless we are very
+                # verbose (-vv)
+                if (len(py.builtin._totext('').join(new_expl[1:])) > 80*8
+                        and item.config.option.verbose < 2):
+                    new_expl[1:] = [py.builtin._totext(
+                        'Detailed information truncated, use "-vv" to see')]
+                res = py.builtin._totext('\n~').join(new_expl)
             if item.config.getvalue("assertmode") == "rewrite":
                 # The result will be fed back a python % formatting
                 # operation, which will fail if there are extraneous
@@ -101,9 +104,9 @@ def pytest_sessionfinish(session):
 def _load_modules(mode):
     """Lazily import assertion related code."""
     global rewrite, reinterpret
-    from _pytest.assertion import reinterpret
+    from _pytest.assertion import reinterpret  # noqa
     if mode == "rewrite":
-        from _pytest.assertion import rewrite
+        from _pytest.assertion import rewrite  # noqa
 
 def warn_about_missing_assertion(mode):
     try:
```
```diff
@@ -1,18 +1,26 @@
 import sys
 import py
 from _pytest.assertion.util import BuiltinAssertionError
+u = py.builtin._totext
 
 
 class AssertionError(BuiltinAssertionError):
     def __init__(self, *args):
         BuiltinAssertionError.__init__(self, *args)
         if args:
+            # on Python2.6 we get len(args)==2 for: assert 0, (x,y)
+            # on Python2.7 and above we always get len(args) == 1
+            # with args[0] being the (x,y) tuple.
+            if len(args) > 1:
+                toprint = args
+            else:
+                toprint = args[0]
             try:
-                self.msg = str(args[0])
+                self.msg = u(toprint)
             except py.builtin._sysex:
                 raise
-            except:
-                self.msg = "<[broken __repr__] %s at %0xd>" %(
-                    args[0].__class__, id(args[0]))
+            except Exception:
+                self.msg = u(
+                    "<[broken __repr__] %s at %0xd>"
+                    % (toprint.__class__, id(toprint)))
         else:
             f = py.code.Frame(sys._getframe(1))
             try:
```
```diff
@@ -41,6 +41,7 @@ class AssertionRewritingHook(object):
     def __init__(self):
         self.session = None
         self.modules = {}
+        self._register_with_pkg_resources()
 
     def set_session(self, session):
         self.fnpats = session.config.getini("python_files")
@@ -55,8 +56,12 @@ class AssertionRewritingHook(object):
         names = name.rsplit(".", 1)
         lastname = names[-1]
         pth = None
-        if path is not None and len(path) == 1:
-            pth = path[0]
+        if path is not None:
+            # Starting with Python 3.3, path is a _NamespacePath(), which
+            # causes problems if not converted to list.
+            path = list(path)
+            if len(path) == 1:
+                pth = path[0]
         if pth is None:
             try:
                 fd, fn, desc = imp.find_module(lastname, path)
@@ -169,6 +174,24 @@ class AssertionRewritingHook(object):
         tp = desc[2]
         return tp == imp.PKG_DIRECTORY
 
+    @classmethod
+    def _register_with_pkg_resources(cls):
+        """
+        Ensure package resources can be loaded from this loader. May be called
+        multiple times, as the operation is idempotent.
+        """
+        try:
+            import pkg_resources
+            # access an attribute in case a deferred importer is present
+            pkg_resources.__name__
+        except ImportError:
+            return
+
+        # Since pytest tests are always located in the file system, the
+        # DefaultProvider is appropriate.
+        pkg_resources.register_loader_type(cls, pkg_resources.DefaultProvider)
+
+
 def _write_pyc(state, co, source_path, pyc):
     # Technically, we don't have to have the same pyc format as
     # (C)Python, since these "pycs" should never be seen by builtin
@@ -196,7 +219,7 @@ def _write_pyc(state, co, source_path, pyc):
 RN = "\r\n".encode("utf-8")
 N = "\n".encode("utf-8")
 
-cookie_re = re.compile("coding[:=]\s*[-\w.]+")
+cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+")
 BOM_UTF8 = '\xef\xbb\xbf'
 
 def _rewrite_test(state, fn):
@@ -220,8 +243,8 @@ def _rewrite_test(state, fn):
     end1 = source.find("\n")
     end2 = source.find("\n", end1 + 1)
     if (not source.startswith(BOM_UTF8) and
-        (not cookie_re.match(source[0:end1]) or
-         not cookie_re.match(source[end1:end2]))):
+            cookie_re.match(source[0:end1]) is None and
+            cookie_re.match(source[end1 + 1:end2]) is None):
         if hasattr(state, "_indecode"):
             return None  # encodings imported us again, we don't rewrite
         state._indecode = True
@@ -300,7 +323,7 @@ def rewrite_asserts(mod):
 
 
 _saferepr = py.io.saferepr
-from _pytest.assertion.util import format_explanation as _format_explanation
+from _pytest.assertion.util import format_explanation as _format_explanation  # noqa
 
 def _should_repr_global_name(obj):
     return not hasattr(obj, "__name__") and not py.builtin.callable(obj)
@@ -538,7 +561,8 @@ class AssertionRewriter(ast.NodeVisitor):
         for i, v in enumerate(boolop.values):
             if i:
                 fail_inner = []
-                self.on_failure.append(ast.If(cond, fail_inner, []))
+                # cond is set in a prior loop iteration below
+                self.on_failure.append(ast.If(cond, fail_inner, []))  # noqa
                 self.on_failure = fail_inner
             self.push_format_context()
             res, expl = self.visit(v)
@@ -631,7 +655,7 @@ class AssertionRewriter(ast.NodeVisitor):
             res_expr = ast.Compare(left_res, [op], [next_res])
             self.statements.append(ast.Assign([store_names[i]], res_expr))
             left_res, left_expl = next_res, next_expl
-        # Use py.code._reprcompare if that's available.
+        # Use pytest.assertion.util._reprcompare if that's available.
         expl_call = self.helper("call_reprcompare",
                                 ast.Tuple(syms, ast.Load()),
                                 ast.Tuple(load_names, ast.Load()),
```
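The cookie_re change in the hunk above is the fix for the coding-cookie detection of issue #330. A quick standalone check, assuming nothing beyond the two regexes shown in the diff, makes the difference visible:

```python
import re

old_cookie_re = re.compile(r"coding[:=]\s*[-\w.]+")
new_cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+")

line = "# -*- coding: utf-8 -*-"   # a standard PEP 263 cookie line
# re.match anchors at position 0, so the old pattern only matched lines
# that literally *start* with "coding" and missed real cookie comments.
print(old_cookie_re.match(line))   # None -> cookie not detected
print(new_cookie_re.match(line))   # a match object -> cookie detected
```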
```diff
@@ -11,6 +11,7 @@ except ImportError:
 
 
 BuiltinAssertionError = py.builtin.builtins.AssertionError
+u = py.builtin._totext
 
 # The _reprcompare attribute on the util module is used by the new assertion
 # interpretation code and assertion rewriter to detect this plugin was
@@ -29,7 +30,18 @@ def format_explanation(explanation):
     for when one explanation needs to span multiple lines, e.g. when
     displaying diffs.
     """
+    # simplify 'assert False where False = ...'
+    explanation = _collapse_false(explanation)
+    lines = _split_explanation(explanation)
+    result = _format_lines(lines)
+    return u('\n').join(result)
+
+
+def _collapse_false(explanation):
+    """Collapse expansions of False
+
+    So this strips out any "assert False\n{where False = ...\n}"
+    blocks.
+    """
     where = 0
     while True:
         start = where = explanation.find("False\n{False = ", where)
@@ -51,28 +63,48 @@ def format_explanation(explanation):
         explanation = (explanation[:start] + explanation[start+15:end-1] +
                        explanation[end+1:])
         where -= 17
-    raw_lines = (explanation or '').split('\n')
-    # escape newlines not followed by {, } and ~
+    return explanation
+
+
+def _split_explanation(explanation):
+    """Return a list of individual lines in the explanation
+
+    This will return a list of lines split on '\n{', '\n}' and '\n~'.
+    Any other newlines will be escaped and appear in the line as the
+    literal '\n' characters.
+    """
+    raw_lines = (explanation or u('')).split('\n')
     lines = [raw_lines[0]]
     for l in raw_lines[1:]:
         if l.startswith('{') or l.startswith('}') or l.startswith('~'):
             lines.append(l)
         else:
             lines[-1] += '\\n' + l
+    return lines
+
+
+def _format_lines(lines):
+    """Format the individual lines
+
+    This will replace the '{', '}' and '~' characters of our mini
+    formatting language with the proper 'where ...', 'and ...' and ' +
+    ...' text, taking care of indentation along the way.
+
+    Return a list of formatted lines.
+    """
     result = lines[:1]
     stack = [0]
     stackcnt = [0]
     for line in lines[1:]:
         if line.startswith('{'):
             if stackcnt[-1]:
-                s = 'and   '
+                s = u('and   ')
             else:
-                s = 'where '
+                s = u('where ')
             stack.append(len(result))
             stackcnt[-1] += 1
             stackcnt.append(0)
-            result.append(' +' + '  '*(len(stack)-1) + s + line[1:])
+            result.append(u(' +') + u('  ')*(len(stack)-1) + s + line[1:])
         elif line.startswith('}'):
-            assert line.startswith('}')
             stack.pop()
@@ -80,9 +112,9 @@ def format_explanation(explanation):
             result[stack[-1]] += line[1:]
         else:
             assert line.startswith('~')
-            result.append('  '*len(stack) + line[1:])
+            result.append(u('  ')*len(stack) + line[1:])
     assert len(stack) == 1
-    return '\n'.join(result)
+    return result
 
 
 # Provide basestring in python3
@@ -97,7 +129,7 @@ def assertrepr_compare(config, op, left, right):
     width = 80 - 15 - len(op) - 2  # 15 chars indentation, 1 space around op
     left_repr = py.io.saferepr(left, maxsize=int(width/2))
     right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
-    summary = '%s %s %s' % (left_repr, op, right_repr)
+    summary = u('%s %s %s') % (left_repr, op, right_repr)
 
     issequence = lambda x: (isinstance(x, (list, tuple, Sequence))
                             and not isinstance(x, basestring))
@@ -120,13 +152,12 @@ def assertrepr_compare(config, op, left, right):
         elif op == 'not in':
             if istext(left) and istext(right):
                 explanation = _notin_text(left, right, verbose)
     except py.builtin._sysex:
         raise
-    except:
+    except Exception:
         excinfo = py.code.ExceptionInfo()
         explanation = [
-            '(pytest_assertion plugin: representation of details failed. '
-            'Probably an object has a faulty __repr__.)', str(excinfo)]
+            u('(pytest_assertion plugin: representation of details failed. '
+              'Probably an object has a faulty __repr__.)'),
+            u(excinfo)]
 
     if not explanation:
         return None
@@ -148,8 +179,8 @@ def _diff_text(left, right, verbose=False):
                 break
         if i > 42:
             i -= 10  # Provide some context
-            explanation = ['Skipping %s identical leading '
-                           'characters in diff, use -v to show' % i]
+            explanation = [u('Skipping %s identical leading '
+                             'characters in diff, use -v to show') % i]
             left = left[i:]
             right = right[i:]
         if len(left) == len(right):
@@ -158,8 +189,8 @@ def _diff_text(left, right, verbose=False):
                     break
             if i > 42:
                 i -= 10  # Provide some context
-                explanation += ['Skipping %s identical trailing '
-                                'characters in diff, use -v to show' % i]
+                explanation += [u('Skipping %s identical trailing '
+                                  'characters in diff, use -v to show') % i]
                 left = left[:-i]
                 right = right[:-i]
     explanation += [line.strip('\n')
@@ -172,16 +203,15 @@ def _compare_eq_sequence(left, right, verbose=False):
     explanation = []
     for i in range(min(len(left), len(right))):
         if left[i] != right[i]:
-            explanation += ['At index %s diff: %r != %r' %
-                            (i, left[i], right[i])]
+            explanation += [u('At index %s diff: %r != %r')
+                            % (i, left[i], right[i])]
             break
     if len(left) > len(right):
-        explanation += [
-            'Left contains more items, first extra item: %s' %
-            py.io.saferepr(left[len(right)],)]
+        explanation += [u('Left contains more items, first extra item: %s')
+                        % py.io.saferepr(left[len(right)],)]
     elif len(left) < len(right):
         explanation += [
-            'Right contains more items, first extra item: %s' %
+            u('Right contains more items, first extra item: %s') %
             py.io.saferepr(right[len(left)],)]
     return explanation  # + _diff_text(py.std.pprint.pformat(left),
                         #              py.std.pprint.pformat(right))
@@ -192,11 +222,11 @@ def _compare_eq_set(left, right, verbose=False):
     diff_left = left - right
     diff_right = right - left
     if diff_left:
-        explanation.append('Extra items in the left set:')
+        explanation.append(u('Extra items in the left set:'))
         for item in diff_left:
             explanation.append(py.io.saferepr(item))
     if diff_right:
-        explanation.append('Extra items in the right set:')
+        explanation.append(u('Extra items in the right set:'))
         for item in diff_right:
             explanation.append(py.io.saferepr(item))
     return explanation
@@ -207,25 +237,25 @@ def _compare_eq_dict(left, right, verbose=False):
     common = set(left).intersection(set(right))
     same = dict((k, left[k]) for k in common if left[k] == right[k])
     if same and not verbose:
-        explanation += ['Omitting %s identical items, use -v to show' %
+        explanation += [u('Omitting %s identical items, use -v to show') %
                         len(same)]
     elif same:
-        explanation += ['Common items:']
+        explanation += [u('Common items:')]
         explanation += py.std.pprint.pformat(same).splitlines()
     diff = set(k for k in common if left[k] != right[k])
     if diff:
-        explanation += ['Differing items:']
+        explanation += [u('Differing items:')]
         for k in diff:
             explanation += [py.io.saferepr({k: left[k]}) + ' != ' +
                             py.io.saferepr({k: right[k]})]
     extra_left = set(left) - set(right)
     if extra_left:
-        explanation.append('Left contains more items:')
+        explanation.append(u('Left contains more items:'))
         explanation.extend(py.std.pprint.pformat(
             dict((k, left[k]) for k in extra_left)).splitlines())
     extra_right = set(right) - set(left)
     if extra_right:
-        explanation.append('Right contains more items:')
+        explanation.append(u('Right contains more items:'))
         explanation.extend(py.std.pprint.pformat(
             dict((k, right[k]) for k in extra_right)).splitlines())
     return explanation
@@ -237,14 +267,14 @@ def _notin_text(term, text, verbose=False):
     tail = text[index+len(term):]
     correct_text = head + tail
     diff = _diff_text(correct_text, text, verbose)
-    newdiff = ['%s is contained here:' % py.io.saferepr(term, maxsize=42)]
+    newdiff = [u('%s is contained here:') % py.io.saferepr(term, maxsize=42)]
     for line in diff:
-        if line.startswith('Skipping'):
+        if line.startswith(u('Skipping')):
             continue
-        if line.startswith('- '):
+        if line.startswith(u('- ')):
             continue
-        if line.startswith('+ '):
-            newdiff.append(' ' + line[2:])
+        if line.startswith(u('+ ')):
+            newdiff.append(u('  ') + line[2:])
         else:
             newdiff.append(line)
     return newdiff
```
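For readers following the format_explanation() split-up above, here is a minimal sketch of the escaping rule the new _split_explanation() docstring describes; the function body mirrors the added lines, only the demo strings are ours:

```python
def _split_explanation(explanation):
    # split on '\n{', '\n}' and '\n~'; escape every other newline
    raw_lines = (explanation or '').split('\n')
    lines = [raw_lines[0]]
    for l in raw_lines[1:]:
        if l.startswith('{') or l.startswith('}') or l.startswith('~'):
            lines.append(l)
        else:
            lines[-1] += '\\n' + l
    return lines

print(_split_explanation('assert foo\n{foo = bar()\n}'))
# ['assert foo', '{foo = bar()', '}']
print(_split_explanation('assert a\nsecond line of a repr'))
# ['assert a\\nsecond line of a repr']
```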
```diff
@@ -16,8 +16,7 @@ def main(args=None, plugins=None):
     initialization.
     """
     config = _prepareconfig(args, plugins)
-    exitstatus = config.hook.pytest_cmdline_main(config=config)
-    return exitstatus
+    return config.hook.pytest_cmdline_main(config=config)
 
 class cmdline:  # compatibility namespace
     main = staticmethod(main)
@@ -41,7 +40,7 @@ def get_plugin_manager():
         return _preinit.pop(0)
     # subsequent calls to main will create a fresh instance
     pluginmanager = PytestPluginManager()
-    pluginmanager.config = config = Config(pluginmanager)  # XXX attr needed?
+    pluginmanager.config = Config(pluginmanager)  # XXX attr needed?
     for spec in default_plugins:
         pluginmanager.import_plugin(spec)
     return pluginmanager
@@ -263,20 +263,11 @@ def importplugin(importspec):
     name = importspec
     try:
         mod = "_pytest." + name
-        #print >>sys.stderr, "tryimport", mod
         __import__(mod)
         return sys.modules[mod]
     except ImportError:
-        #e = py.std.sys.exc_info()[1]
-        #if str(e).find(name) == -1:
-        #    raise
        pass #
     try:
-        #print >>sys.stderr, "tryimport", importspec
         __import__(importspec)
     except ImportError:
         raise ImportError(importspec)
-        return sys.modules[importspec]
+    return sys.modules[importspec]
@@ -313,19 +304,36 @@ class MultiCall:
         return kwargs
 
 def varnames(func):
+    """ return argument name tuple for a function, method, class or callable.
+
+    In case of a class, its "__init__" method is considered.
+    For methods the "self" parameter is not included unless you are passing
+    an unbound method with Python3 (which has no supports for unbound methods)
+    """
+    cache = getattr(func, "__dict__", {})
     try:
-        return func._varnames
-    except AttributeError:
+        return cache["_varnames"]
+    except KeyError:
         pass
-    if not inspect.isfunction(func) and not inspect.ismethod(func):
-        func = getattr(func, '__call__', func)
-    ismethod = inspect.ismethod(func)
+    if inspect.isclass(func):
+        try:
+            func = func.__init__
+        except AttributeError:
+            return ()
+        ismethod = True
+    else:
+        if not inspect.isfunction(func) and not inspect.ismethod(func):
+            func = getattr(func, '__call__', func)
+        ismethod = inspect.ismethod(func)
     rawcode = py.code.getrawcode(func)
     try:
         x = rawcode.co_varnames[ismethod:rawcode.co_argcount]
     except AttributeError:
         x = ()
-    py.builtin._getfuncdict(func)['_varnames'] = x
+    try:
+        cache["_varnames"] = x
+    except TypeError:
+        pass
     return x
 
 class HookRelay:
```
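A sketch of what the reworked varnames() aims to return; this is a simplified stand-in (plain __code__ instead of py.code.getrawcode, and no caching), with hypothetical example callables:

```python
import inspect

def varnames_sketch(func):
    ismethod = False
    if inspect.isclass(func):
        # the new branch: consider the class's __init__ and treat it
        # like a method so the implicit "self" parameter is dropped
        func = func.__init__
        ismethod = True
    code = func.__code__
    return code.co_varnames[ismethod:code.co_argcount]

def fixture_func(request, tmpdir):
    pass

class FixtureFactory:
    def __init__(self, request):
        pass

print(varnames_sketch(fixture_func))    # ('request', 'tmpdir')
print(varnames_sketch(FixtureFactory))  # ('request',)
```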
```diff
@@ -91,8 +91,13 @@ class DoctestTextfile(DoctestItem, pytest.File):
         doctest = py.std.doctest
         # satisfy `FixtureRequest` constructor...
         self.funcargs = {}
-        self._fixtureinfo = FuncFixtureInfo((), [], {})
+        fm = self.session._fixturemanager
+        def func():
+            pass
+        self._fixtureinfo = fm.getfixtureinfo(node=self, func=func,
+                                              cls=None, funcargs=False)
         fixture_request = FixtureRequest(self)
+        fixture_request._fillfixtures()
         failed, tot = doctest.testfile(
             str(self.fspath), module_relative=False,
             optionflags=doctest.ELLIPSIS,
```
```diff
@@ -120,7 +120,6 @@ def pytest_report_header(config):
 
     if config.option.traceconfig:
         lines.append("active plugins:")
-        plugins = []
         items = config.pluginmanager._name2plugin.items()
         for name, plugin in items:
             if hasattr(plugin, '__file__'):
```
```diff
@@ -9,7 +9,6 @@ import re
 import sys
 import time
 
-
 # Python 2.X and 3.X compatibility
 try:
     unichr(65)
@@ -131,31 +130,31 @@ class LogXML(object):
             self.skipped += 1
         else:
             fail = Junit.failure(message="test failure")
-            fail.append(str(report.longrepr))
+            fail.append(unicode(report.longrepr))
             self.append(fail)
             self.failed += 1
         self._write_captured_output(report)
 
     def append_collect_failure(self, report):
         #msg = str(report.longrepr.reprtraceback.extraline)
-        self.append(Junit.failure(str(report.longrepr),
+        self.append(Junit.failure(unicode(report.longrepr),
                                   message="collection failure"))
         self.errors += 1
 
     def append_collect_skipped(self, report):
         #msg = str(report.longrepr.reprtraceback.extraline)
-        self.append(Junit.skipped(str(report.longrepr),
+        self.append(Junit.skipped(unicode(report.longrepr),
                                   message="collection skipped"))
         self.skipped += 1
 
     def append_error(self, report):
-        self.append(Junit.error(str(report.longrepr),
+        self.append(Junit.error(unicode(report.longrepr),
                                 message="test setup failure"))
         self.errors += 1
 
     def append_skipped(self, report):
         if hasattr(report, "wasxfail"):
-            self.append(Junit.skipped(str(report.wasxfail),
+            self.append(Junit.skipped(unicode(report.wasxfail),
                                       message="expected test failure"))
         else:
             filename, lineno, skipreason = report.longrepr
@@ -201,10 +200,10 @@ class LogXML(object):
                            classname="pytest",
                            name="internal"))
 
-    def pytest_sessionstart(self, session):
+    def pytest_sessionstart(self):
         self.suite_start_time = time.time()
 
-    def pytest_sessionfinish(self, session, exitstatus, __multicall__):
+    def pytest_sessionfinish(self):
         if py.std.sys.version_info[0] < 3:
             logfile = py.std.codecs.open(self.logfile, 'w', encoding='utf-8')
         else:
```
```diff
@@ -230,6 +230,8 @@ class Node(object):
         #: allow adding of extra keywords to use for matching
         self.extra_keyword_matches = set()
 
+        # used for storing artificial fixturedefs for direct parametrization
+        self._name2pseudofixturedef = {}
         #self.extrainit()
 
     @property
@@ -270,21 +272,11 @@ class Node(object):
             self._nodeid = x = self._makeid()
             return x
 
     def _makeid(self):
         return self.parent.nodeid + "::" + self.name
 
-    def __eq__(self, other):
-        if not isinstance(other, Node):
-            return False
-        return (self.__class__ == other.__class__ and
-                self.name == other.name and self.parent == other.parent)
-
-    def __ne__(self, other):
-        return not self == other
-
     def __hash__(self):
-        return hash((self.name, self.parent))
+        return hash(self.nodeid)
 
     def setup(self):
         pass
@@ -365,6 +357,8 @@ class Node(object):
         self.session._setupstate.addfinalizer(fin, self)
 
     def getparent(self, cls):
+        """ get the next parent node (including ourself)
+        which is an instance of the given class"""
         current = self
         while current and not isinstance(current, cls):
             current = current.parent
@@ -425,7 +419,6 @@ class Collector(Node):
 
     def _prunetraceback(self, excinfo):
         if hasattr(self, 'fspath'):
-            path = self.fspath
             traceback = excinfo.traceback
             ntraceback = traceback.cut(path=self.fspath)
             if ntraceback == traceback:
@@ -698,11 +691,4 @@ class Session(FSCollector):
                 yield x
             node.ihook.pytest_collectreport(report=rep)
 
-def getfslineno(obj):
-    # xxx let decorators etc specify a sane ordering
-    if hasattr(obj, 'place_as'):
-        obj = obj.place_as
-    fslineno = py.code.getfslineno(obj)
-    assert isinstance(fslineno[1], int), obj
-    return fslineno
```
```diff
@@ -140,7 +140,13 @@ def matchkeyword(colitem, keywordexpr):
     for name in colitem.function.__dict__:
         mapped_names.add(name)
 
-    return eval(keywordexpr, {}, KeywordMapping(mapped_names))
+    mapping = KeywordMapping(mapped_names)
+    if " " not in keywordexpr:
+        # special case to allow for simple "-k pass" and "-k 1.3"
+        return mapping[keywordexpr]
+    elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
+        return not mapping[keywordexpr[4:]]
+    return eval(keywordexpr, {}, mapping)
 
 
 def pytest_configure(config):
@@ -182,6 +188,9 @@ class MarkGenerator:
         if name not in self._markers:
             raise AttributeError("%r not a registered marker" % (name,))
 
+def istestfunc(func):
+    return hasattr(func, "__call__") and \
+        getattr(func, "__name__", "<lambda>") != "<lambda>"
 
 class MarkDecorator:
     """ A decorator for test functions and test classes.  When applied
@@ -217,8 +226,8 @@ class MarkDecorator:
         otherwise add *args/**kwargs in-place to mark information. """
         if args:
             func = args[0]
-            if len(args) == 1 and hasattr(func, '__call__') or \
-               hasattr(func, '__bases__'):
+            if len(args) == 1 and (istestfunc(func) or
+                                   hasattr(func, '__bases__')):
                 if hasattr(func, '__bases__'):
                     if hasattr(func, 'pytestmark'):
                         l = func.pytestmark
```
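The two special-cased branches added to matchkeyword() are what turn `-k 1.3` and `-k "not None"` from syntax errors into valid filters (issue357). A self-contained sketch with a stand-in substring mapping (our own substitute for pytest's internal KeywordMapping) exercises the new paths:

```python
class SubstringMapping:
    """Stand-in: a name "matches" if it occurs in any collected test name."""
    def __init__(self, names):
        self._names = names
    def __getitem__(self, subname):
        return any(subname in name for name in self._names)

def matchkeyword_sketch(names, keywordexpr):
    mapping = SubstringMapping(names)
    if " " not in keywordexpr:
        # special case to allow for simple "-k pass" and "-k 1.3"
        return mapping[keywordexpr]
    elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
        return not mapping[keywordexpr[4:]]
    # full python expressions such as "http and not post" still go to eval
    return eval(keywordexpr, {}, mapping)

names = {"test_valid[1.3]", "test_none[None]"}
print(matchkeyword_sketch(names, "1.3"))       # True; a SyntaxError before 2.5.0
print(matchkeyword_sketch(names, "not None"))  # False; test_none matches "None"
```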
```diff
@@ -1,6 +1,7 @@
 """ monkeypatching and mocking functionality.  """
 
 import os, sys
+from py.builtin import _basestring
 
 def pytest_funcarg__monkeypatch(request):
     """The returned ``monkeypatch`` funcarg provides these
@@ -28,7 +29,7 @@ def pytest_funcarg__monkeypatch(request):
 
 def derive_importpath(import_path):
     import pytest
-    if not isinstance(import_path, str) or "." not in import_path:
+    if not isinstance(import_path, _basestring) or "." not in import_path:
         raise TypeError("must be absolute import path string, not %r" %
                         (import_path,))
     rest = []
@@ -85,7 +86,7 @@ class monkeypatch:
         import inspect
 
         if value is notset:
-            if not isinstance(target, str):
+            if not isinstance(target, _basestring):
                 raise TypeError("use setattr(target, name, value) or "
                                 "setattr(target, value) with target being a dotted "
                                 "import string")
@@ -115,7 +116,7 @@ class monkeypatch:
         """
         __tracebackhide__ = True
         if name is notset:
-            if not isinstance(target, str):
+            if not isinstance(target, _basestring):
                 raise TypeError("use delattr(target, name) or "
                                 "delattr(target) with target being a dotted "
                                 "import string")
```
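The str to _basestring change matters because the dotted-path form of monkeypatch.setattr() takes its target as a string, and on Python 2 that string may arrive as unicode (issue371). For reference, usage of that API looks like this (the patched function and test body are illustrative):

```python
def test_basename_patched(monkeypatch):
    # target given as a dotted import path string; derive_importpath()
    # resolves it, which is where the _basestring check applies
    monkeypatch.setattr("os.path.basename", lambda path: "patched")
    import os.path
    assert os.path.basename("/a/b/c") == "patched"
```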
```diff
@@ -26,7 +26,6 @@ def pytest_addoption(parser):
 def pytest_configure(config):
     # This might be called multiple times. Only take the first.
     global _pytest_fullpath
-    import pytest
     try:
         _pytest_fullpath
     except NameError:
@@ -121,7 +120,6 @@ class HookRecorder:
 
     def contains(self, entries):
         __tracebackhide__ = True
-        from py.builtin import print_
         i = 0
         entries = list(entries)
         backlocals = py.std.sys._getframe(1).f_locals
@@ -260,9 +258,6 @@ class TmpTestdir:
     def makefile(self, ext, *args, **kwargs):
         return self._makefile(ext, args, kwargs)
 
-    def makeini(self, source):
-        return self.makefile('cfg', setup=source)
-
     def makeconftest(self, source):
         return self.makepyfile(conftest=source)
 
@@ -475,7 +470,7 @@ class TmpTestdir:
             # XXX we rely on script refering to the correct environment
            # we cannot use "(py.std.sys.executable,script)"
            # becaue on windows the script is e.g. a py.test.exe
-            return (py.std.sys.executable, _pytest_fullpath,)
+            return (py.std.sys.executable, _pytest_fullpath,)  # noqa
         else:
             py.test.skip("cannot run %r with --no-tools-on-path" % scriptname)
 
@@ -521,15 +516,16 @@ class TmpTestdir:
         return self.spawn(cmd, expect_timeout=expect_timeout)
 
     def spawn(self, cmd, expect_timeout=10.0):
-        pexpect = py.test.importorskip("pexpect", "2.4")
+        pexpect = py.test.importorskip("pexpect", "3.0")
         if hasattr(sys, 'pypy_version_info') and '64' in py.std.platform.machine():
             pytest.skip("pypy-64 bit not supported")
         if sys.platform == "darwin":
             pytest.xfail("pexpect does not work reliably on darwin?!")
-        logfile = self.tmpdir.join("spawn.out")
-        child = pexpect.spawn(cmd, logfile=logfile.open("w"))
+        if sys.platform.startswith("freebsd"):
+            pytest.xfail("pexpect does not work reliably on freebsd")
+        logfile = self.tmpdir.join("spawn.out").open("wb")
+        child = pexpect.spawn(cmd, logfile=logfile)
+        self.request.addfinalizer(logfile.close)
         child.timeout = expect_timeout
         return child
```
*(One file's diff suppressed because it is too large.)*
```diff
@@ -6,7 +6,7 @@ import py
 
 def pytest_addoption(parser):
     group = parser.getgroup("terminal reporting", "resultlog plugin options")
-    group.addoption('--resultlog', '--result-log', action="store",
+    group.addoption('--resultlog', '--result-log', action="store",
         metavar="path", default=None,
         help="path for machine-readable result log.")
 
@@ -85,7 +85,7 @@ class ResultLog(object):
         if not report.passed:
             if report.failed:
                 code = "F"
-                longrepr = str(report.longrepr.reprcrash)
+                longrepr = str(report.longrepr)
             else:
                 assert report.skipped
                 code = "S"
```
```diff
@@ -193,7 +193,6 @@ def pytest_runtest_makereport(item, call):
         outcome = "passed"
         longrepr = None
     else:
-        excinfo = call.excinfo
         if not isinstance(excinfo, py.code.ExceptionInfo):
             outcome = "failed"
             longrepr = excinfo
@@ -328,9 +327,18 @@ class SetupState(object):
 
     def _callfinalizers(self, colitem):
         finalizers = self._finalizers.pop(colitem, None)
+        exc = None
         while finalizers:
             fin = finalizers.pop()
-            fin()
+            try:
+                fin()
+            except Exception:
+                # XXX Only first exception will be seen by user,
+                #     ideally all should be reported.
+                if exc is None:
+                    exc = sys.exc_info()
+        if exc:
+            py.builtin._reraise(*exc)
 
     def _teardown_with_finalization(self, colitem):
         self._callfinalizers(colitem)
@@ -450,25 +458,25 @@ fail.Exception = Failed
 
 
 def importorskip(modname, minversion=None):
-    """ return imported module if it has a higher __version__ than the
-    optionally specified 'minversion' - otherwise call py.test.skip()
-    with a message detailing the mismatch.
+    """ return imported module if it has at least "minversion" as its
+    __version__ attribute.  If no minversion is specified the a skip
+    is only triggered if the module can not be imported.
+    Note that version comparison only works with simple version strings
+    like "1.2.3" but not "1.2.3.dev1" or others.
     """
     __tracebackhide__ = True
     compile(modname, '', 'eval')  # to catch syntaxerrors
     try:
         __import__(modname)
     except ImportError:
-        py.test.skip("could not import %r" %(modname,))
+        skip("could not import %r" %(modname,))
     mod = sys.modules[modname]
     if minversion is None:
         return mod
     verattr = getattr(mod, '__version__', None)
-    if isinstance(minversion, str):
-        minver = minversion.split(".")
-    else:
-        minver = list(minversion)
-    if verattr is None or verattr.split(".") < minver:
-        py.test.skip("module %r has __version__ %r, required is: %r" %(
-            modname, verattr, minversion))
+    def intver(verstring):
+        return [int(x) for x in verstring.split(".")]
+    if verattr is None or intver(verattr) < intver(minversion):
+        skip("module %r has __version__ %r, required is: %r" %(
+            modname, verattr, minversion))
     return mod
```
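The importorskip() rewrite above replaces string comparison of version parts with integer comparison; two lines show why that matters (intver is the helper added in the diff):

```python
def intver(verstring):
    return [int(x) for x in verstring.split(".")]

# Comparing parts as strings sorts "10" before "9", so 2.10 looked older
# than 2.9 and valid modules were wrongly skipped:
print("2.10".split(".") < "2.9".split("."))  # True  (wrong: 2.10 is newer)
print(intver("2.10") < intver("2.9"))        # False (correct ordering)
```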
```diff
@@ -10,6 +10,14 @@ def pytest_addoption(parser):
                 help="run tests even if they are marked xfail")
 
 def pytest_configure(config):
+    if config.option.runxfail:
+        old = pytest.xfail
+        config._cleanup.append(lambda: setattr(pytest, "xfail", old))
+        def nop(*args, **kwargs):
+            pass
+        nop.Exception = XFailed
+        setattr(pytest, "xfail", nop)
+
     config.addinivalue_line("markers",
         "skipif(condition): skip the given test function if eval(condition) "
         "results in a True value.  Evaluation happens within the "
@@ -209,7 +217,6 @@ def pytest_terminal_summary(terminalreporter):
         tr._tw.line(line)
 
 def show_simple(terminalreporter, lines, stat, format):
-    tw = terminalreporter._tw
     failed = terminalreporter.stats.get(stat)
     if failed:
         for rep in failed:
```
```diff
@@ -39,7 +39,7 @@ class DictImporter(object):
             if is_pkg:
                 module.__path__ = [fullname]
 
-            do_exec(co, module.__dict__)
+            do_exec(co, module.__dict__)  # noqa
             return sys.modules[fullname]
 
         def get_source(self, name):
@@ -63,4 +63,4 @@ if __name__ == "__main__":
     sys.meta_path.insert(0, importer)
 
     entry = "@ENTRY@"
-    do_exec(entry, locals())
+    do_exec(entry, locals())  # noqa
```
```diff
@@ -5,7 +5,6 @@ This is a good source for looking at the various reporting hooks.
 import pytest
 import py
 import sys
-import os
 
 def pytest_addoption(parser):
     group = parser.getgroup("terminal reporting", "reporting", after="general")
@@ -30,6 +29,10 @@ def pytest_addoption(parser):
     group._addoption('--fulltrace', '--full-trace',
                action="store_true", default=False,
                help="don't cut any tracebacks (default is to cut).")
+    group._addoption('--color', metavar="color",
+               action="store", dest="color", default='auto',
+               choices=['yes', 'no', 'auto'],
+               help="color terminal output (yes/no/auto).")
 
 def pytest_configure(config):
     config.option.verbose -= config.option.quiet
@@ -86,6 +89,10 @@ class TerminalReporter:
         if file is None:
             file = py.std.sys.stdout
         self._tw = self.writer = py.io.TerminalWriter(file)
+        if self.config.option.color == 'yes':
+            self._tw.hasmarkup = True
+        if self.config.option.color == 'no':
+            self._tw.hasmarkup = False
         self.currentfspath = None
         self.reportchars = getreportopt(config)
         self.hasmarkup = self._tw.hasmarkup
```
```diff
@@ -50,8 +50,6 @@ class UnitTestCase(pytest.Class):
             x = getattr(self.obj, name)
             funcobj = getattr(x, 'im_func', x)
             transfer_markers(funcobj, cls, module)
-            if hasattr(funcobj, 'todo'):
-                pytest.mark.xfail(reason=str(funcobj.todo))(funcobj)
             yield TestCaseFunction(name, parent=self)
             foundsomething = True
 
@@ -70,10 +68,6 @@ class TestCaseFunction(pytest.Function):
     def setup(self):
         self._testcase = self.parent.obj(self.name)
         self._obj = getattr(self._testcase, self.name)
-        if hasattr(self._testcase, 'skip'):
-            pytest.skip(self._testcase.skip)
-        if hasattr(self._obj, 'skip'):
-            pytest.skip(self._obj.skip)
         if hasattr(self._testcase, 'setup_method'):
             self._testcase.setup_method(self._obj)
         if hasattr(self, "_request"):
```
```diff
@@ -1,10 +1,12 @@
+import sys
+
 if __name__ == '__main__':
     import cProfile
     import py
     import pstats
-    stats = cProfile.run('py.test.cmdline.main(["empty.py", ])', 'prof')
+    script = sys.argv[1] if len(sys.argv) > 1 else "empty.py"
+    stats = cProfile.run('py.test.cmdline.main([%r])' % script, 'prof')
     p = pstats.Stats("prof")
     p.strip_dirs()
     p.sort_stats('cumulative')
-    print(p.print_stats(50))
+    print(p.print_stats(250))
```
**bench/manyparam.py** (new file, 12 lines)

```diff
@@ -0,0 +1,12 @@
+
+import pytest
+
+@pytest.fixture(scope='module', params=range(966))
+def foo(request):
+    return request.param
+
+def test_it(foo):
+    pass
+
+def test_it2(foo):
+    pass
```
**bench/skip.py** (new file, 10 lines)

```diff
@@ -0,0 +1,10 @@
+
+import pytest
+
+
+SKIP = True
+
+@pytest.mark.parametrize("x", xrange(5000))
+def test_foo(x):
+    if SKIP:
+        pytest.skip("heh")
```
```diff
@@ -5,6 +5,7 @@ Release announcements
 .. toctree::
    :maxdepth: 2
 
+   release-2.5.0
    release-2.4.2
    release-2.4.1
    release-2.4.0
```
**doc/en/announce/release-2.5.0.txt** (new file, 175 lines)

```diff
@@ -0,0 +1,175 @@
+pytest-2.5.0: now down to ZERO reported bugs!
+===========================================================================
+
+pytest-2.5.0 is a big fixing release, the result of two community bug
+fixing days plus numerous additional works from many people and
+reporters. The release should be fully compatible to 2.4.2, existing
+plugins and test suites.  We aim at maintaining this level of ZERO reported
+bugs because it's no fun if your testing tool has bugs, is it? Under a
+condition, though: when submitting a bug report please provide
+clear information about the circumstances and a simple example which
+reproduces the problem.
+
+The issue tracker is of course not empty now.  We have many remaining
+"enhacement" issues which we'll hopefully can tackle in 2014 with your
+help.
+
+For those who use older Python versions, please note that pytest is not
+automatically tested on python2.5 due to virtualenv, setuptools and tox
+not supporting it anymore.  Manual verification shows that it mostly
+works fine but it's not going to be part of the automated release
+process and thus likely to break in the future.
+
+As usual, current docs are at
+
+    http://pytest.org
+
+and you can upgrade from pypi via::
+
+    pip install -U pytest
+
+Particular thanks for helping with this release go to Anatoly Bubenkoff,
+Floris Bruynooghe, Marc Abramowitz, Ralph Schmitt, Ronny Pfannschmidt,
+Donald Stufft, James Lan, Rob Dennis, Jason R. Coombs, Mathieu Agopian,
+Virgil Dupras, Bruno Oliveira, Alex Gaynor and others.
+
+have fun,
+holger krekel
+
+
+2.5.0
+-----------------------------------
+
+- dropped python2.5 from automated release testing of pytest itself
+  which means it's probably going to break soon (but still works
+  with this release we believe).
+
+- simplified and fixed implementation for calling finalizers when
+  parametrized fixtures or function arguments are involved. finalization
+  is now performed lazily at setup time instead of in the "teardown phase".
+  While this might sound odd at first, it helps to ensure that we are
+  correctly handling setup/teardown even in complex code. User-level code
+  should not be affected unless it's implementing the pytest_runtest_teardown
+  hook and expecting certain fixture instances are torn down within (very
+  unlikely and would have been unreliable anyway).
+
+- PR90: add --color=yes|no|auto option to force terminal coloring
+  mode ("auto" is default). Thanks Marc Abramowitz.
+
+- fix issue319 - correctly show unicode in assertion errors. Many
+  thanks to Floris Bruynooghe for the complete PR. Also means
+  we depend on py>=1.4.19 now.
+
+- fix issue396 - correctly sort and finalize class-scoped parametrized
+  tests independently from number of methods on the class.
+
+- refix issue323 in a better way -- parametrization should now never
+  cause Runtime Recursion errors because the underlying algorithm
+  for re-ordering tests per-scope/per-fixture is not recursive
+  anymore (it was tail-call recursive before which could lead
+  to problems for more than >966 non-function scoped parameters).
+
+- fix issue290 - there is preliminary support now for parametrizing
+  with repeated same values (sometimes useful to to test if calling
+  a second time works as with the first time).
+
+- close issue240 - document precisely how pytest module importing
+  works, discuss the two common test directory layouts, and how it
+  interacts with PEP420-namespace packages.
+
+- fix issue246 fix finalizer order to be LIFO on independent fixtures
+  depending on a parametrized higher-than-function scoped fixture.
+  (was quite some effort so please bear with the complexity of this sentence :)
+  Thanks Ralph Schmitt for the precise failure example.
+
+- fix issue244 by implementing special index for parameters to only use
+  indices for paramentrized test ids
+
+- fix issue287 by running all finalizers but saving the exception
+  from the first failing finalizer and re-raising it so teardown will
+  still have failed. We reraise the first failing exception because
+  it might be the cause for other finalizers to fail.
+
+- fix ordering when mock.patch or other standard decorator-wrappings
+  are used with test methods. This fixues issue346 and should
+  help with random "xdist" collection failures. Thanks to
+  Ronny Pfannschmidt and Donald Stufft for helping to isolate it.
+
+- fix issue357 - special case "-k" expressions to allow for
+  filtering with simple strings that are not valid python expressions.
+  Examples: "-k 1.3" matches all tests parametrized with 1.3.
+  "-k None" filters all tests that have "None" in their name
+  and conversely "-k 'not None'".
+  Previously these examples would raise syntax errors.
+
+- fix issue384 by removing the trial support code
+  since the unittest compat enhancements allow
+  trial to handle it on its own
+
+- don't hide an ImportError when importing a plugin produces one.
+  fixes issue375.
+
+- fix issue275 - allow usefixtures and autouse fixtures
+  for running doctest text files.
+
+- fix issue380 by making --resultlog only rely on longrepr instead
+  of the "reprcrash" attribute which only exists sometimes.
+
+- address issue122: allow @pytest.fixture(params=iterator) by exploding
+  into a list early on.
+
+- fix pexpect-3.0 compatibility for pytest's own tests.
+  (fixes issue386)
+
+- allow nested parametrize-value markers, thanks James Lan for the PR.
+
+- fix unicode handling with new monkeypatch.setattr(import_path, value)
+  API. Thanks Rob Dennis. Fixes issue371.
+
+- fix unicode handling with junitxml, fixes issue368.
+
+- In assertion rewriting mode on Python 2, fix the detection of coding
+  cookies. See issue #330.
+
+- make "--runxfail" turn imperative pytest.xfail calls into no ops
+  (it already did neutralize pytest.mark.xfail markers)
+
+- refine pytest / pkg_resources interactions: The AssertionRewritingHook
+  PEP302 compliant loader now registers itself with setuptools/pkg_resources
+  properly so that the pkg_resources.resource_stream method works properly.
+  Fixes issue366. Thanks for the investigations and full PR to Jason R. Coombs.
+
+- pytestconfig fixture is now session-scoped as it is the same object during the
+  whole test run. Fixes issue370.
+
+- avoid one surprising case of marker malfunction/confusion::
+
+      @pytest.mark.some(lambda arg: ...)
+      def test_function():
+
+  would not work correctly because pytest assumes @pytest.mark.some
+  gets a function to be decorated already. We now at least detect if this
+  arg is an lambda and thus the example will work. Thanks Alex Gaynor
+  for bringing it up.
+
+- xfail a test on pypy that checks wrong encoding/ascii (pypy does
+  not error out). fixes issue385.
+
+- internally make varnames() deal with classes's __init__,
+  although it's not needed by pytest itself atm. Also
+  fix caching. Fixes issue376.
+
+- fix issue221 - handle importing of namespace-package with no
+  __init__.py properly.
+
+- refactor internal FixtureRequest handling to avoid monkeypatching.
+  One of the positive user-facing effects is that the "request" object
+  can now be used in closures.
+
+- fixed version comparison in pytest.importskip(modname, minverstring)
+
+- fix issue377 by clarifying in the nose-compat docs that pytest
+  does not duplicate the unittest-API into the "plain" namespace.
+
+- fix verbose reporting for @mock'd test functions
```
```diff
@@ -26,7 +26,7 @@ you will see the return value of the function call::
 
     $ py.test test_assert1.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
+    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
     collected 1 items
 
     test_assert1.py F
@@ -116,7 +116,7 @@ if you run this module::
 
     $ py.test test_assert2.py
     =========================== test session starts ============================
-    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
+    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
     collected 1 items
 
     test_assert2.py F
```
```diff
@@ -64,7 +64,7 @@ of the failing function and hide the other one::
 
     $ py.test
     =========================== test session starts ============================
-    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
+    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
     collected 2 items
 
     test_module.py .F
@@ -78,7 +78,7 @@ of the failing function and hide the other one::
 
     test_module.py:9: AssertionError
     ----------------------------- Captured stdout ------------------------------
-    setting up <function test_func2 at 0x282d2a8>
+    setting up <function test_func2 at 0x29437d0>
     ==================== 1 failed, 1 passed in 0.01 seconds ====================
 
 Accessing captured output from a test function
```
@@ -121,6 +121,8 @@ Builtin configuration file options
.. confval:: python_functions

One or more name prefixes determining which test functions
and methods are considered tests.
and methods are considered tests. Note that this
has no effect on methods that live on a ``unittest.TestCase``
derived class.

See :ref:`change naming conventions` for examples.

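As a hedged illustration of this option (the file name and prefixes are
hypothetical, not taken from the diff above), a matching ini entry could
look like::

    # content of pytest.ini
    [pytest]
    python_functions = test check
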
@@ -44,7 +44,7 @@ then you can just invoke ``py.test`` without command line options::

$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 1 items

mymodule.py .
@@ -56,3 +56,7 @@ It is possible to use fixtures using the ``getfixture`` helper::
# content of example.rst
>>> tmp = getfixture('tmpdir')
>>> ...
>>>

Also, :ref:`usefixtures` and :ref:`autouse` fixtures are supported
when executing text doctest files.

@@ -28,7 +28,7 @@ You can then restrict a test run to only run tests marked with ``webtest``::

$ py.test -v -m webtest
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items

test_server.py:3: test_send_http PASSED
@@ -40,7 +40,7 @@ Or the inverse, running all tests except the webtest ones::

$ py.test -v -m "not webtest"
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items

test_server.py:6: test_something_quick PASSED
@@ -61,7 +61,7 @@ select tests based on their names::

$ py.test -v -k http # running with the above defined example module
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items

test_server.py:3: test_send_http PASSED
@@ -73,7 +73,7 @@ And you can also run all tests except the ones that match the keyword::

$ py.test -k "not send_http" -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items

test_server.py:6: test_something_quick PASSED
@@ -86,7 +86,7 @@ Or to select "http" and "quick" tests::

$ py.test -k "http or quick" -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 3 items

test_server.py:3: test_send_http PASSED
@@ -95,6 +95,17 @@ Or to select "http" and "quick" tests::
================= 1 tests deselected by '-khttp or quick' ==================
================== 2 passed, 1 deselected in 0.01 seconds ==================

.. note::

If you are using expressions such as "X and Y" then both X and Y
need to be simple non-keyword names. For example, "pass" or "from"
will result in SyntaxErrors because "-k" evaluates the expression.

However, if the "-k" argument is a simple string, no such restrictions
apply. Also "-k 'not STRING'" has no restrictions. You can also
specify numbers like "-k 1.3" to match tests which are parametrized
with the float "1.3".

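For illustration, the simple-string forms described in the note::

    py.test -k 1.3              # select tests parametrized with the float 1.3
    py.test -k "not send_http"  # plain 'not STRING', no naming restrictions
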
Registering markers
-------------------------------------

@@ -118,7 +129,7 @@ You can ask which markers exist for your test suite - the list includes our just

@pytest.mark.xfail(condition, reason=None, run=True): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. See http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@@ -255,7 +266,7 @@ the test needs::

$ py.test -E stage2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 1 items

test_someenv.py s
@@ -266,7 +277,7 @@ and here is one that specifies exactly the environment needed::

$ py.test -E stage1
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 1 items

test_someenv.py .
@@ -282,7 +293,7 @@ The ``--markers`` option always gives you a list of available markers::

@pytest.mark.xfail(condition, reason=None, run=True): mark the test function as an expected failure if eval(condition) has a True value. Optionally specify a reason for better reporting and run=False if you don't even want to execute the test function. See http://pytest.org/latest/skipping.html

@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in multiple different argument value sets. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. see http://pytest.org/latest/parametrize.html for more info and examples.
@pytest.mark.parametrize(argnames, argvalues): call a test function multiple times passing in different arguments in turn. argvalues generally needs to be a list of values if argnames specifies only one name or a list of tuples of values if argnames specifies multiple names. Example: @parametrize('arg1', [1,2]) would lead to two calls of the decorated test function, one with arg1=1 and another with arg1=2. See http://pytest.org/latest/parametrize.html for more info and examples.

@pytest.mark.usefixtures(fixturename1, fixturename2, ...): mark tests as needing all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures

@@ -384,12 +395,12 @@ then you will see two test skipped and two executed tests as expected::

$ py.test -rs # this option reports skip reasons
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 4 items

test_plat.py s.s.
========================= short test summary info ==========================
SKIP [2] /tmp/doc-exec-598/conftest.py:12: cannot run on platform linux2
SKIP [2] /tmp/doc-exec-62/conftest.py:12: cannot run on platform linux2

=================== 2 passed, 2 skipped in 0.01 seconds ====================

@@ -397,7 +408,7 @@ Note that if you specify a platform via the marker-command line option like this

$ py.test -m linux2
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 4 items

test_plat.py .
@@ -448,7 +459,7 @@ We can now use the ``-m option`` to select one set::

$ py.test -m interface --tb=short
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 4 items

test_module.py FF
@@ -469,7 +480,7 @@ or to select both "event" and "interface" tests::

$ py.test -m "interface or event" --tb=short
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 4 items

test_module.py FFF
@@ -488,4 +499,4 @@ or to select both "event" and "interface" tests::
> assert 0
E assert 0
============= 1 tests deselected by "-m 'interface or event'" ==============
================== 3 failed, 1 deselected in 0.02 seconds ==================
================== 3 failed, 1 deselected in 0.01 seconds ==================

@@ -27,7 +27,7 @@ now execute the test specification::

nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items

test_simple.yml .F
@@ -56,7 +56,7 @@ consulted when reporting in ``verbose`` mode::

nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items

test_simple.yml:1: usecase: ok PASSED
@@ -74,7 +74,7 @@ interesting to just look at the collection tree::

nonpython $ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items
<YamlFile 'test_simple.yml'>
<YamlItem 'ok'>

@@ -106,7 +106,7 @@ this is a fully self-contained example which you can run with::

$ py.test test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 4 items

test_scenarios.py ....
@@ -118,7 +118,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia

$ py.test --collect-only test_scenarios.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 4 items
<Module 'test_scenarios.py'>
<Class 'TestSampleWithScenarios'>
@@ -182,7 +182,7 @@ Let's first see how it looks like at collection time::

$ py.test test_backends.py --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items
<Module 'test_backends.py'>
<Function 'test_db_initialized[d1]'>
@@ -197,7 +197,7 @@ And then when we run the test::
================================= FAILURES =================================
_________________________ test_db_initialized[d2] __________________________

db = <conftest.DB2 instance at 0x2dbd950>
db = <conftest.DB2 instance at 0x1992c20>

def test_db_initialized(db):
# a dummy test
@@ -253,14 +253,14 @@ argument sets to use for each test function. Let's run it::
================================= FAILURES =================================
________________________ TestClass.test_equals[1-2] ________________________

self = <test_parametrize.TestClass instance at 0x258a6c8>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0x13483b0>, a = 1, b = 2

def test_equals(self, a, b):
> assert a == b
E assert 1 == 2

test_parametrize.py:18: AssertionError
1 failed, 2 passed in 0.02 seconds
1 failed, 2 passed in 0.01 seconds

Indirect parametrization with multiple fixtures
--------------------------------------------------------------
@@ -282,7 +282,7 @@ Running it results in some skips if we don't have all the python interpreters in
............sss............sss............sss............ssssssssssssssssss
========================= short test summary info ==========================
SKIP [27] /home/hpk/p/pytest/doc/en/example/multipython.py:21: 'python2.8' not found
48 passed, 27 skipped in 1.37 seconds
48 passed, 27 skipped in 1.41 seconds

Indirect parametrization of optional implementations/imports
--------------------------------------------------------------------
@@ -329,12 +329,12 @@ If you run this with reporting for skips enabled::

$ py.test -rs test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items

test_module.py s.
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-600/conftest.py:10: could not import 'opt2'
SKIP [1] /tmp/doc-exec-64/conftest.py:10: could not import 'opt2'

=================== 1 passed, 1 skipped in 0.01 seconds ====================


@@ -43,7 +43,7 @@ then the test collection looks like this::

$ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items
<Module 'check_myapp.py'>
<Class 'CheckMyApp'>
@@ -53,6 +53,12 @@ then the test collection looks like this::

============================= in 0.01 seconds =============================

.. note::

the ``python_functions`` and ``python_classes`` options have no effect
for ``unittest.TestCase`` test discovery because pytest delegates
detection of test case methods to unittest code.

Interpreting cmdline arguments as Python packages
-----------------------------------------------------

@@ -82,7 +88,7 @@ You can always peek at the collection tree without running tests like this::

. $ py.test --collect-only pythoncollection.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 3 items
<Module 'pythoncollection.py'>
<Function 'test_function'>
@@ -135,7 +141,7 @@ interpreters and will leave out the setup.py file::

$ py.test --collect-only
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 1 items
<Module 'pkg/module_py2.py'>
<Function 'test_only_on_python2'>

@@ -13,7 +13,7 @@ get on the terminal - we are working on that):

assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 39 items

failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
@@ -30,7 +30,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________

self = <failure_demo.TestFailing object at 0x26f8f50>
self = <failure_demo.TestFailing object at 0x1d5e7d0>

def test_simple(self):
def f():
@@ -40,13 +40,13 @@ get on the terminal - we are working on that):

> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x269d5f0>()
E + and 43 = <function g at 0x269d6e0>()
E + where 42 = <function f at 0x1cfcb90>()
E + and 43 = <function g at 0x1cfcc08>()

failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________

self = <failure_demo.TestFailing object at 0x26ade90>
self = <failure_demo.TestFailing object at 0x1d0fed0>

def test_simple_multiline(self):
otherfunc_multi(
@@ -66,19 +66,19 @@ get on the terminal - we are working on that):
failure_demo.py:11: AssertionError
___________________________ TestFailing.test_not ___________________________

self = <failure_demo.TestFailing object at 0x26aac10>
self = <failure_demo.TestFailing object at 0x1d4bc10>

def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x269d8c0>()
E + where 42 = <function f at 0x1d071b8>()

failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________

self = <failure_demo.TestSpecialisedExplanations object at 0x2861490>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d0bed0>

def test_eq_text(self):
> assert 'spam' == 'eggs'
@@ -89,7 +89,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________

self = <failure_demo.TestSpecialisedExplanations object at 0x26ade10>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d0de10>

def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@@ -102,7 +102,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________

self = <failure_demo.TestSpecialisedExplanations object at 0x26f8ad0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d5e110>

def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -115,7 +115,7 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________

self = <failure_demo.TestSpecialisedExplanations object at 0x26aa450>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ec06d0>

def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
@@ -132,7 +132,7 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0x26ad7d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d0d950>

def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
@@ -156,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0x26f8550>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d61c50>

def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@@ -166,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________

self = <failure_demo.TestSpecialisedExplanations object at 0x26aa310>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d4be10>

def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@@ -178,7 +178,7 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________

self = <failure_demo.TestSpecialisedExplanations object at 0x26a6950>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ec0ad0>

def test_eq_dict(self):
> assert {'a': 0, 'b': 1, 'c': 0} == {'a': 0, 'b': 2, 'd': 0}
@@ -194,7 +194,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________

self = <failure_demo.TestSpecialisedExplanations object at 0x26e4210>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d0bbd0>

def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@@ -210,7 +210,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________

self = <failure_demo.TestSpecialisedExplanations object at 0x26f9c10>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d4bd10>

def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@@ -220,7 +220,7 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________

self = <failure_demo.TestSpecialisedExplanations object at 0x26aac50>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ec0650>

def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
@@ -229,7 +229,7 @@ get on the terminal - we are working on that):
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________

self = <failure_demo.TestSpecialisedExplanations object at 0x26a6b90>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d0bad0>

def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
@@ -247,7 +247,7 @@ get on the terminal - we are working on that):
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________

self = <failure_demo.TestSpecialisedExplanations object at 0x26f9d90>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d0d410>

def test_not_in_text_single(self):
text = 'single foo line'
@@ -260,7 +260,7 @@ get on the terminal - we are working on that):
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________

self = <failure_demo.TestSpecialisedExplanations object at 0x26f89d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ec0610>

def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
@@ -273,7 +273,7 @@ get on the terminal - we are working on that):
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______

self = <failure_demo.TestSpecialisedExplanations object at 0x26ad310>
self = <failure_demo.TestSpecialisedExplanations object at 0x1d5ed50>

def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
@@ -292,7 +292,7 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x26e4650>.b
E + where 1 = <failure_demo.Foo object at 0x1d0da50>.b

failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
@@ -302,8 +302,8 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x26f8c50>.b
E + where <failure_demo.Foo object at 0x26f8c50> = <class 'failure_demo.Foo'>()
E + where 1 = <failure_demo.Foo object at 0x1d0b8d0>.b
E + where <failure_demo.Foo object at 0x1d0b8d0> = <class 'failure_demo.Foo'>()

failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
@@ -319,7 +319,7 @@ get on the terminal - we are working on that):
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <failure_demo.Foo object at 0x26a65d0>
self = <failure_demo.Foo object at 0x1d5eb90>

def _get_b(self):
> raise Exception('Failed to get attrib')
@@ -335,15 +335,15 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x26ad050>.b
E + where <failure_demo.Foo object at 0x26ad050> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x26ad850>.b
E + where <failure_demo.Bar object at 0x26ad850> = <class 'failure_demo.Bar'>()
E + where 1 = <failure_demo.Foo object at 0x1d15c10>.b
E + where <failure_demo.Foo object at 0x1d15c10> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1d15290>.b
E + where <failure_demo.Bar object at 0x1d15290> = <class 'failure_demo.Bar'>()

failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________

self = <failure_demo.TestRaises instance at 0x2859e18>
self = <failure_demo.TestRaises instance at 0x1ee2248>

def test_raises(self):
s = 'qwe'
@@ -355,10 +355,10 @@ get on the terminal - we are working on that):
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'

<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:905>:1: ValueError
<0-codegen /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:976>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________

self = <failure_demo.TestRaises instance at 0x27013b0>
self = <failure_demo.TestRaises instance at 0x1d14b48>

def test_raises_doesnt(self):
> raises(IOError, "int('3')")
@@ -367,7 +367,7 @@ get on the terminal - we are working on that):
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________

self = <failure_demo.TestRaises instance at 0x271d9e0>
self = <failure_demo.TestRaises instance at 0x1ed9cb0>

def test_raise(self):
> raise ValueError("demo error")
@@ -376,7 +376,7 @@ get on the terminal - we are working on that):
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________

self = <failure_demo.TestRaises instance at 0x270b3f8>
self = <failure_demo.TestRaises instance at 0x1eeb200>

def test_tupleerror(self):
> a,b = [1]
@@ -385,7 +385,7 @@ get on the terminal - we are working on that):
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______

self = <failure_demo.TestRaises instance at 0x26ab368>
self = <failure_demo.TestRaises instance at 0x1eebdd0>

def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@@ -398,7 +398,7 @@ get on the terminal - we are working on that):
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________

self = <failure_demo.TestRaises instance at 0x271b488>
self = <failure_demo.TestRaises instance at 0x1edf758>

def test_some_error(self):
> if namenotexi:
@@ -426,7 +426,7 @@ get on the terminal - we are working on that):
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/en/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________

self = <failure_demo.TestMoreErrors instance at 0x271da28>
self = <failure_demo.TestMoreErrors instance at 0x1ed0128>

def test_complex_error(self):
def f():
@@ -455,7 +455,7 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________

self = <failure_demo.TestMoreErrors instance at 0x2716950>
self = <failure_demo.TestMoreErrors instance at 0x1ec7f38>

def test_z1_unpack_error(self):
l = []
@@ -465,7 +465,7 @@ get on the terminal - we are working on that):
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________

self = <failure_demo.TestMoreErrors instance at 0x26f5e18>
self = <failure_demo.TestMoreErrors instance at 0x1ee47a0>

def test_z2_type_error(self):
l = 3
@@ -475,19 +475,19 @@ get on the terminal - we are working on that):
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________

self = <failure_demo.TestMoreErrors instance at 0x27075f0>
self = <failure_demo.TestMoreErrors instance at 0x1eea2d8>

def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x26ff8c8>('456')
E + where <built-in method startswith of str object at 0x26ff8c8> = '123'.startswith
E assert <built-in method startswith of str object at 0x1d63a58>('456')
E + where <built-in method startswith of str object at 0x1d63a58> = '123'.startswith

failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________

self = <failure_demo.TestMoreErrors instance at 0x2707ef0>
self = <failure_demo.TestMoreErrors instance at 0x1ef08c0>

def test_startswith_nested(self):
def f():
@@ -495,15 +495,15 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
E assert <built-in method startswith of str object at 0x26ff8c8>('456')
E + where <built-in method startswith of str object at 0x26ff8c8> = '123'.startswith
E + where '123' = <function f at 0x269d7d0>()
E + and '456' = <function g at 0x2698ed8>()
E assert <built-in method startswith of str object at 0x1d63a58>('456')
E + where <built-in method startswith of str object at 0x1d63a58> = '123'.startswith
E + where '123' = <function f at 0x1d07500>()
E + and '456' = <function g at 0x1cf2b18>()

failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________

self = <failure_demo.TestMoreErrors instance at 0x271bef0>
self = <failure_demo.TestMoreErrors instance at 0x1ed4a70>

def test_global_func(self):
> assert isinstance(globf(42), float)
@@ -513,18 +513,18 @@ get on the terminal - we are working on that):
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________

self = <failure_demo.TestMoreErrors instance at 0x271bb90>
self = <failure_demo.TestMoreErrors instance at 0x1edf998>

def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x271bb90>.x
E + where 42 = <failure_demo.TestMoreErrors instance at 0x1edf998>.x

failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________

self = <failure_demo.TestMoreErrors instance at 0x2634170>
self = <failure_demo.TestMoreErrors instance at 0x1edf3f8>

def test_compare(self):
> assert globf(10) < 5
@@ -534,7 +534,7 @@ get on the terminal - we are working on that):
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________

self = <failure_demo.TestMoreErrors instance at 0x2717f80>
self = <failure_demo.TestMoreErrors instance at 0x1ef15f0>

def test_try_finally(self):
x = 1
@@ -543,4 +543,4 @@ get on the terminal - we are working on that):
E assert 1 == 0

failure_demo.py:210: AssertionError
======================== 39 failed in 0.26 seconds =========================
======================== 39 failed in 0.23 seconds =========================

@@ -108,7 +108,7 @@ directory with the above conftest.py::

$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 0 items

============================= in 0.00 seconds =============================
@@ -152,12 +152,12 @@ and when running it will see a skipped "slow" test::

$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items

test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-603/conftest.py:9: need --runslow option to run
SKIP [1] /tmp/doc-exec-67/conftest.py:9: need --runslow option to run

=================== 1 passed, 1 skipped in 0.01 seconds ====================

@@ -165,7 +165,7 @@ Or run it including the ``slow`` marked test::

$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items

test_module.py ..
@@ -256,7 +256,7 @@ which will add the string to the test header accordingly::

$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
project deps: mylib-1.1
collected 0 items

@@ -279,7 +279,7 @@ which will add info only when run with "--v"::

$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
@@ -290,7 +290,7 @@ and nothing when run plainly::

$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 0 items

============================= in 0.00 seconds =============================
@@ -322,7 +322,7 @@ Now we can profile which test functions execute the slowest::

$ py.test --durations=3
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 3 items

test_some_are_slow.py ...
@@ -383,7 +383,7 @@ If we run this::

$ py.test -rx
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 4 items

test_step.py .Fx.
@@ -391,7 +391,7 @@ If we run this::
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling instance at 0x1c6fb90>
self = <test_step.TestUserHandling instance at 0x192ea28>

def test_modification(self):
> assert 0
@@ -453,7 +453,7 @@ We can run this::

$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 7 items

test_step.py .Fx.
@@ -463,17 +463,17 @@ We can run this::

================================== ERRORS ==================================
_______________________ ERROR at setup of test_root ________________________
file /tmp/doc-exec-603/b/test_error.py, line 1
file /tmp/doc-exec-67/b/test_error.py, line 1
def test_root(db): # no db here, will error out
fixture 'db' not found
available fixtures: pytestconfig, recwarn, monkeypatch, capfd, capsys, tmpdir
available fixtures: monkeypatch, capsys, tmpdir, capfd, pytestconfig, recwarn
use 'py.test --fixtures [testpath]' for help on them.

/tmp/doc-exec-603/b/test_error.py:1
/tmp/doc-exec-67/b/test_error.py:1
================================= FAILURES =================================
____________________ TestUserHandling.test_modification ____________________

self = <test_step.TestUserHandling instance at 0x22f3518>
self = <test_step.TestUserHandling instance at 0x2099a28>

def test_modification(self):
> assert 0
@@ -482,20 +482,20 @@ We can run this::
test_step.py:9: AssertionError
_________________________________ test_a1 __________________________________

db = <conftest.DB instance at 0x2304248>
db = <conftest.DB instance at 0x20a1518>

def test_a1(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x2304248>
E AssertionError: <conftest.DB instance at 0x20a1518>

a/test_db.py:2: AssertionError
_________________________________ test_a2 __________________________________

db = <conftest.DB instance at 0x2304248>
db = <conftest.DB instance at 0x20a1518>

def test_a2(db):
> assert 0, db # to show value
E AssertionError: <conftest.DB instance at 0x2304248>
E AssertionError: <conftest.DB instance at 0x20a1518>

a/test_db2.py:2: AssertionError
========== 3 failed, 2 passed, 1 xfailed, 1 error in 0.03 seconds ==========
@@ -553,7 +553,7 @@ and run them::

$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items

test_module.py FF
@@ -561,7 +561,7 @@ and run them::
================================= FAILURES =================================
________________________________ test_fail1 ________________________________

tmpdir = local('/tmp/pytest-190/test_fail10')
tmpdir = local('/tmp/pytest-281/test_fail10')

def test_fail1(tmpdir):
> assert 0
@@ -575,12 +575,12 @@ and run them::
E assert 0

test_module.py:4: AssertionError
========================= 2 failed in 0.01 seconds =========================
========================= 2 failed in 0.02 seconds =========================

you will have a "failures" file which contains the failing test ids::

$ cat failures
test_module.py::test_fail1 (/tmp/pytest-190/test_fail10)
test_module.py::test_fail1 (/tmp/pytest-281/test_fail10)
test_module.py::test_fail2

Making test result information available in fixtures
@@ -643,7 +643,7 @@ and run it::

$ py.test -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 3 items

test_module.py Esetting up a test failed! test_module.py::test_setup_fails
@@ -676,7 +676,7 @@ and run it::
E assert 0

test_module.py:15: AssertionError
==================== 2 failed, 1 error in 0.02 seconds =====================
==================== 2 failed, 1 error in 0.01 seconds =====================

You'll see that the fixture finalizers could use the precise reporting
information.

@@ -76,7 +76,7 @@ marked ``smtp`` fixture function. Running the test looks like this::

$ py.test test_smtpsimple.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 1 items

test_smtpsimple.py F
@@ -84,7 +84,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp = <smtplib.SMTP instance at 0x2bb9d88>
smtp = <smtplib.SMTP instance at 0x24a9950>

def test_ehlo(smtp):
response, msg = smtp.ehlo()
@@ -94,7 +94,7 @@ marked ``smtp`` fixture function. Running the test looks like this::
E assert 0

test_smtpsimple.py:12: AssertionError
========================= 1 failed in 0.18 seconds =========================
========================= 1 failed in 0.21 seconds =========================

In the failure traceback we see that the test function was called with a
``smtp`` argument, the ``smtplib.SMTP()`` instance created by the fixture
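
(The fixture itself is not part of this excerpt; as a rough reconstruction
based on the surrounding docs text, it looks approximately like this)::

    # content of test_smtpsimple.py -- sketch; host is the docs' example server
    import pytest
    import smtplib

    @pytest.fixture
    def smtp():
        return smtplib.SMTP("merlinux.eu")
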
@@ -194,7 +194,7 @@ inspect what is going on and can now run the tests::

$ py.test test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 2 items

test_module.py FF
@@ -202,7 +202,7 @@ inspect what is going on and can now run the tests::
================================= FAILURES =================================
________________________________ test_ehlo _________________________________

smtp = <smtplib.SMTP instance at 0x18f2fc8>
smtp = <smtplib.SMTP instance at 0x138a290>

def test_ehlo(smtp):
response = smtp.ehlo()
@@ -214,7 +214,7 @@ inspect what is going on and can now run the tests::
test_module.py:6: AssertionError
________________________________ test_noop _________________________________

smtp = <smtplib.SMTP instance at 0x18f2fc8>
smtp = <smtplib.SMTP instance at 0x138a290>

def test_noop(smtp):
response = smtp.noop()
@@ -223,7 +223,7 @@ inspect what is going on and can now run the tests::
E assert 0

test_module.py:11: AssertionError
========================= 2 failed in 0.16 seconds =========================
========================= 2 failed in 0.19 seconds =========================

You see the two ``assert 0`` failing and more importantly you can also see
that the same (module-scoped) ``smtp`` object was passed into the two
@@ -234,7 +234,7 @@ quick as a single one because they reuse the same instance.
If you decide that you rather want to have a session-scoped ``smtp``
instance, you can simply declare it::

@pytest.fixture(scope=``session``)
@pytest.fixture(scope="session")
def smtp(...):
# the returned fixture value will be shared for
# all tests needing it
@@ -271,7 +271,7 @@ Let's execute it::
$ py.test -s -q --tb=no
FFteardown smtp

2 failed in 0.15 seconds
2 failed in 0.24 seconds

We see that the ``smtp`` instance is finalized after the two
tests finished execution. Note that if we decorated our fixture
@@ -312,7 +312,7 @@ again, nothing much has changed::

$ py.test -s -q --tb=no
FF
2 failed in 0.16 seconds
2 failed in 0.23 seconds

Let's quickly create another test module that actually sets the
server URL in its module namespace::
@@ -379,7 +379,7 @@ So let's just do another run::
================================= FAILURES =================================
__________________________ test_ehlo[merlinux.eu] __________________________

smtp = <smtplib.SMTP instance at 0x2662290>
smtp = <smtplib.SMTP instance at 0x15f7998>

def test_ehlo(smtp):
response = smtp.ehlo()
@@ -391,7 +391,7 @@ So let's just do another run::
test_module.py:6: AssertionError
__________________________ test_noop[merlinux.eu] __________________________

smtp = <smtplib.SMTP instance at 0x2662290>
smtp = <smtplib.SMTP instance at 0x15f7998>

def test_noop(smtp):
response = smtp.noop()
@@ -402,7 +402,7 @@ So let's just do another run::
test_module.py:11: AssertionError
________________________ test_ehlo[mail.python.org] ________________________

smtp = <smtplib.SMTP instance at 0x26c2dd0>
smtp = <smtplib.SMTP instance at 0x16535f0>

def test_ehlo(smtp):
response = smtp.ehlo()
@@ -411,9 +411,11 @@ So let's just do another run::
E assert 'merlinux' in 'mail.python.org\nSIZE 25600000\nETRN\nSTARTTLS\nENHANCEDSTATUSCODES\n8BITMIME\nDSN'

test_module.py:5: AssertionError
----------------------------- Captured stdout ------------------------------
finalizing <smtplib.SMTP instance at 0x15f7998>
________________________ test_noop[mail.python.org] ________________________

smtp = <smtplib.SMTP instance at 0x26c2dd0>
smtp = <smtplib.SMTP instance at 0x16535f0>

def test_noop(smtp):
response = smtp.noop()
@@ -422,7 +424,7 @@ So let's just do another run::
E assert 0

test_module.py:11: AssertionError
4 failed in 6.32 seconds
4 failed in 6.30 seconds

We see that our two test functions each ran twice, against the different
``smtp`` instances. Note also, that with the ``mail.python.org``
@@ -462,13 +464,13 @@ Here we declare an ``app`` fixture which receives the previously defined

$ py.test -v test_appsetup.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 2 items

test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED
test_appsetup.py:12: test_smtp_exists[merlinux.eu] PASSED
test_appsetup.py:12: test_smtp_exists[mail.python.org] PASSED

========================= 2 passed in 5.75 seconds =========================
========================= 2 passed in 5.63 seconds =========================

Due to the parametrization of ``smtp`` the test will run twice with two
different ``App`` instances and respective smtp servers. There is no
@@ -526,7 +528,7 @@ Let's run the tests in verbose mode and with looking at the print-output::

$ py.test -v -s test_module.py
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2 -- /home/hpk/p/pytest/.tox/regen/bin/python
platform linux2 -- Python 2.7.3 -- pytest-2.5.0 -- /home/hpk/p/pytest/.tox/regen/bin/python
collecting ... collected 8 items

test_module.py:15: test_0[1] test0 1

@@ -23,7 +23,7 @@ Installation options::
To check your installation has installed the correct version::

$ py.test --version
This is py.test version 2.4.2, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc
This is py.test version 2.5.0, imported from /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/pytest.pyc

If you get an error check out :ref:`installation issues`.

@@ -45,7 +45,7 @@ That's it. You can execute the test function now::

$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.7.3 -- pytest-2.4.2
platform linux2 -- Python 2.7.3 -- pytest-2.5.0
collected 1 items

test_sample.py F
@@ -123,7 +123,7 @@ run the module by passing its filename::
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________

self = <test_class.TestClass instance at 0x1e1f518>
self = <test_class.TestClass instance at 0x2a8fef0>

def test_two(self):
x = "hello"
@@ -159,7 +159,7 @@ before performing the test function call. Let's just run it::
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________

tmpdir = local('/tmp/pytest-186/test_needsfiles0')
tmpdir = local('/tmp/pytest-277/test_needsfiles0')

def test_needsfiles(tmpdir):
print tmpdir
@@ -168,7 +168,7 @@ before performing the test function call. Let's just run it::

test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-186/test_needsfiles0
/tmp/pytest-277/test_needsfiles0
1 failed in 0.01 seconds

Before the test runs, a unique-per-test-invocation temporary directory

@@ -8,24 +8,151 @@ Good Integration Practises
Work with virtual environments
-----------------------------------------------------------

We recommend using virtualenv_ environments and easy_install_
(or pip_) for installing your application dependencies as well as
the ``pytest`` package itself. This way you will get a much more reproducible
environment. A good tool to help you automate test runs against multiple
dependency configurations or Python interpreters is `tox`_.
We recommend using virtualenv_ environments and pip_
(or easy_install_) for installing your application and any dependencies
as well as the ``pytest`` package itself. This way you will get an isolated
and reproducible environment. Given you have installed virtualenv_
and execute it from the command line, here is an example session for Unix
or Windows::

    virtualenv .         # create a virtualenv directory in the current directory

    source bin/activate  # on Unix

    scripts/activate     # on Windows

We can now install pytest::

    pip install pytest

Due to the ``activate`` step above the ``pip`` will come from
the virtualenv directory and install any package into the isolated
virtual environment.

Choosing a test layout / import rules
------------------------------------------

py.test supports two common test layouts:

* putting tests into an extra directory outside your actual application
  code, useful if you have many functional tests or for other reasons
  want to keep tests separate from actual application code (often a good
  idea)::

      setup.py   # your distutils/setuptools Python package metadata
      mypkg/
          __init__.py
          appmodule.py
      tests/
          test_app.py
          ...


* inlining test directories into your application package, useful if you
  have a direct relation between (unit-)test and application modules and
  want to distribute your tests along with your application::

      setup.py   # your distutils/setuptools Python package metadata
      mypkg/
          __init__.py
          appmodule.py
          ...
          test/
              test_app.py
              ...

Important notes relating to both schemes:

- **make sure that "mypkg" is importable**, for example by typing once::

      pip install -e .   # install package using setup.py in editable mode

- **avoid "__init__.py" files in your test directories**.
  This way your tests can run easily against an installed version
  of ``mypkg``, independently of whether the installed version contains
  the tests or not.

- With inlined tests you might put ``__init__.py`` into test
  directories and make them installable as part of your application.
  Using the ``py.test --pyargs mypkg`` invocation, pytest will
  discover where mypkg is installed and collect tests from there.
  With the "external" layout you can still distribute tests but they
  will not be installed or become importable.

Typically you can run tests by pointing to test directories or modules::

    py.test tests/test_app.py       # for external test dirs
    py.test mypkg/test/test_app.py  # for inlined test dirs
    py.test mypkg                   # run tests in all below test directories
    py.test                         # run all tests below current dir
    ...

Because of the above ``editable install`` mode you can change your
|
||||
source code (both tests and the app) and rerun tests at will.
|
||||
Once you are done with your work, you can `use tox`_ to make sure
|
||||
that the package is really correct and tests pass in all
|
||||
required configurations.
|
||||
|
||||
.. note::

    You can use Python3 namespace packages (PEP420) for your application
    but pytest will still perform `package name`_ discovery based on the
    presence of ``__init__.py`` files.  If you use one of the above
    two recommended file system layouts but leave away the ``__init__.py``
    files it should just work on Python3.3 and above.  When using
    "inlined tests", however, you will need to use absolute imports for
    getting at your application code because the test modules will be
    imported directly, without any application context.  The latter allows
    your tests to run against an installed version of your package.

.. _`package name`:

.. note::

    If py.test finds a "a/b/test_module.py" test file while
    recursing into the filesystem it determines the import name
    as follows:

    * determine ``basedir``: this is the first "upward" (towards the root)
      directory not containing an ``__init__.py``.  If e.g. both ``a``
      and ``b`` contain an ``__init__.py`` file then the parent directory
      of ``a`` will become the ``basedir``.

    * perform ``sys.path.insert(0, basedir)`` to make the test module
      importable under the fully qualified import name.

    * ``import a.b.test_module`` where the path is determined
      by converting path separators ``/`` into "." characters.  This means
      you must follow the convention of having directory and file
      names map directly to the import names.

    The reason for this somewhat evolved importing technique is
    that in larger projects multiple test modules might import
    from each other and thus deriving a canonical import name helps
    to avoid surprises such as a test modules getting imported twice.
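
To make the ``basedir`` rule above concrete, here is a minimal sketch of the
upward walk it describes (``find_basedir`` is a hypothetical helper for
illustration only, not pytest's actual implementation)::

    import os

    def find_basedir(test_file):
        # first ancestor directory of test_file that has no __init__.py
        current = os.path.dirname(os.path.abspath(test_file))
        while os.path.isfile(os.path.join(current, '__init__.py')):
            current = os.path.dirname(current)
        return current

    # For a/b/test_module.py with __init__.py in both a/ and b/,
    # find_basedir returns the parent of a/, so the derived import
    # name becomes "a.b.test_module".
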
.. _`virtualenv`: http://pypi.python.org/pypi/virtualenv
.. _`buildout`: http://www.buildout.org/
.. _pip: http://pypi.python.org/pypi/pip

.. _`use tox`:

Use tox and Continuous Integration servers
-------------------------------------------------

If you frequently release code to the public you
may want to look into `tox`_, the virtualenv test automation
tool and its `pytest support <http://testrun.org/tox/latest/example/pytest.html>`_.
The basic idea is to generate a JUnitXML file through the ``--junitxml=PATH`` option and have a continuous integration server like Jenkins_ pick it up
and generate reports.
If you frequently release code and want to make sure that your actual
package passes all tests you may want to look into `tox`_, the
virtualenv test automation tool and its `pytest support
<http://testrun.org/tox/latest/example/pytest.html>`_.
Tox helps you to setup virtualenv environments with pre-defined
dependencies and then executing a pre-configured test command with
options.  It will run tests against the installed package and not
against your source code checkout, helping to detect packaging
glitches.

If you want to use Jenkins_ you can use the ``--junitxml=PATH`` option
to create a JUnitXML file that Jenkins_ can pick up and generate reports.

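As a quick illustration of the tox workflow described above, a minimal
``tox.ini`` might look along these lines (a hedged sketch; adjust
``envlist`` to the interpreters you actually support)::

    [tox]
    envlist = py27,py33

    [testenv]
    deps = pytest
    commands = py.test --junitxml={envtmpdir}/junit.xml

Running ``tox`` then builds and installs your package into each listed
virtualenv and executes the configured test command against the installed
package, which is what catches the packaging glitches mentioned above.
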
.. _standalone:
.. _`genscript method`:

@@ -33,21 +160,19 @@ and generate reports.

Create a py.test standalone script
-------------------------------------------

If you are a maintainer or application developer and want others
to easily run tests you can generate a completely standalone "py.test"
script::
If you are a maintainer or application developer and want people
who don't deal with python much to easily run tests you may generate
a standalone "py.test" script::

    py.test --genscript=runtests.py

generates a ``runtests.py`` script which is a fully functional basic
This generates a ``runtests.py`` script which is a fully functional basic
``py.test`` script, running unchanged under Python2 and Python3.
You can tell people to download the script and then e.g. run it like this::

    python runtests.py


Integrating with distutils / ``python setup.py test``
--------------------------------------------------------

@@ -93,8 +218,9 @@ options.

Integration with setuptools test commands
----------------------------------------------------

Setuptools supports writing our own Test command for invoking
pytest::
Setuptools supports writing our own Test command for invoking pytest.
Most often it is better to use tox_ instead, but here is how you can
get started with setuptools integration::

    from setuptools.command.test import test as TestCommand
    import sys
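
The diff cuts this example off after the imports; the conventional pattern
continues roughly as follows (a sketch of the usual ``PyTest`` command class,
consistent with the ``cmdclass = {'test': PyTest}`` entry visible in the
setup.py diff further below)::

    class PyTest(TestCommand):
        def finalize_options(self):
            TestCommand.finalize_options(self)
            self.test_args = []
            self.test_suite = True

        def run_tests(self):
            # import here, because outside the eggs aren't loaded yet
            import pytest
            errno = pytest.main(self.test_args)
            sys.exit(errno)

With this class wired into ``setup()`` via ``cmdclass``, running
``python setup.py test`` invokes pytest.
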
@@ -143,69 +269,4 @@ For examples of how to customize your test discovery :doc:`example/pythoncollect

Within Python modules, py.test also discovers tests using the standard
:ref:`unittest.TestCase <unittest.TestCase>` subclassing technique.

Choosing a test layout / import rules
------------------------------------------

py.test supports common test layouts:

* inlining test directories into your application package, useful if you want to
  keep (unit) tests and actually tested code close together::

    mypkg/
        __init__.py
        appmodule.py
        ...
        test/
            test_app.py
            ...

* putting tests into an extra directory outside your actual application
  code, useful if you have many functional tests or want to keep
  tests separate from actual application code::

    mypkg/
        __init__.py
        appmodule.py
    tests/
        test_app.py
        ...

In both cases you usually need to make sure that ``mypkg`` is importable,
for example by using the setuptools ``python setup.py develop`` method.

You can run your tests by pointing to it::

    py.test tests/test_app.py       # for external test dirs
    py.test mypkg/test/test_app.py  # for inlined test dirs
    py.test mypkg                   # run tests in all below test directories
    py.test                         # run all tests below current dir
    ...

.. _`package name`:

.. note::

    If py.test finds a "a/b/test_module.py" test file while
    recursing into the filesystem it determines the import name
    as follows:

    * find ``basedir`` -- this is the first "upward" (towards the root)
      directory not containing an ``__init__.py``.  If both the ``a``
      and ``b`` directories contain an ``__init__.py`` the basedir will
      be the parent dir of ``a``.

    * perform ``sys.path.insert(0, basedir)`` to make the test module
      importable under the fully qualified import name.

    * ``import a.b.test_module`` where the path is determined
      by converting path separators ``/`` into "." characters.  This means
      you must follow the convention of having directory and file
      names map directly to the import names.

    The reason for this somewhat evolved importing technique is
    that in larger projects multiple test modules might import
    from each other and thus deriving a canonical import name helps
    to avoid surprises such as a test modules getting imported twice.


.. include:: links.inc

@@ -8,7 +8,7 @@ pytest: helps you write better programs

**a mature full-featured Python testing tool**

- runs on Posix/Windows, Python 2.4-3.3, PyPy and Jython-2.5.1
- runs on Posix/Windows, Python 2.5-3.3, PyPy and Jython-2.5.1
- :ref:`comprehensive online <toc>` and `PDF documentation <pytest.pdf>`_
- many :ref:`third party plugins <extplugins>` and
  :ref:`builtin helpers <pytest helpers>`

@@ -30,9 +30,25 @@ Supported nose Idioms

Unsupported idioms / known issues
----------------------------------

- unittest-style ``setUp, tearDown, setUpClass, tearDownClass``
  are recognized only on ``unittest.TestCase`` classes but not
  on plain classes.  ``nose`` supports these methods also on plain
  classes but pytest deliberately does not.  As nose and pytest already
  both support ``setup_class, teardown_class, setup_method, teardown_method``
  it doesn't seem useful to duplicate the unittest-API like nose does.
  If you however rather think pytest should support the unittest-spelling on
  plain classes please post `to this issue
  <https://bitbucket.org/hpk42/pytest/issue/377/>`_.

- nose imports test modules with the same import path (e.g.
  ``tests.test_mod``) but different file system paths
  (e.g. ``tests/test_mode.py`` and ``other/tests/test_mode.py``)
  by extending sys.path/import semantics.  pytest does not do that
  but there is discussion in `issue268 <https://bitbucket.org/hpk42/pytest/issue/268>`_ for adding some support.  Note that
  `nose2 choose to avoid this sys.path/import hackery <https://nose2.readthedocs.org/en/latest/differences.html#test-discovery-and-loading>`_.

- nose-style doctests are not collected and executed correctly,
  also doctest fixtures don't work.

- no nose-configuration is recognized
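
To illustrate the spelling that both tools share (and that the passage above
recommends over the unittest-API on plain classes), a small example::

    class TestExample:                  # a plain class, no unittest.TestCase
        def setup_method(self, method):
            self.items = []             # fresh state before every test

        def teardown_method(self, method):
            self.items = None

        def test_append(self):
            self.items.append(1)
            assert self.items == [1]
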
@@ -53,7 +53,7 @@ them in turn::

    $ py.test
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
    collected 3 items

    test_expectation.py ..F

@@ -100,7 +100,7 @@ Let's run this::

    $ py.test
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
    collected 3 items

    test_expectation.py ..x

@@ -170,8 +170,8 @@ Let's also run with a stringinput that will lead to a failing test::

        def test_valid_string(stringinput):
    >       assert stringinput.isalpha()
    E       assert <built-in method isalpha of str object at 0x2ac85b043198>()
    E        +  where <built-in method isalpha of str object at 0x2ac85b043198> = '!'.isalpha
    E       assert <built-in method isalpha of str object at 0x2b4b17865198>()
    E        +  where <built-in method isalpha of str object at 0x2b4b17865198> = '!'.isalpha

    test_strings.py:3: AssertionError
    1 failed in 0.01 seconds

@@ -185,7 +185,7 @@ listlist::

    $ py.test -q -rs test_strings.py
    s
    ========================= short test summary info ==========================
    SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:1024: got empty parameter set, function test_valid_string at /tmp/doc-exec-561/test_strings.py:1
    SKIP [1] /home/hpk/p/pytest/.tox/regen/local/lib/python2.7/site-packages/_pytest/python.py:1087: got empty parameter set, function test_valid_string at /tmp/doc-exec-24/test_strings.py:1
    1 skipped in 0.01 seconds

For further examples, you might want to look at :ref:`more
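For context, the ``stringinput`` fixture in the transcript above is driven by
a command line option; the conftest presumably looks along these lines (a
sketch using the pytest-2.x parametrization API)::

    # conftest.py
    def pytest_addoption(parser):
        parser.addoption("--stringinput", action="append", default=[],
                         help="list of stringinputs to pass to test functions")

    def pytest_generate_tests(metafunc):
        if "stringinput" in metafunc.fixturenames:
            metafunc.parametrize("stringinput",
                                 metafunc.config.option.stringinput)

With no ``--stringinput`` given, the parameter set is empty, which is exactly
what produces the ``got empty parameter set`` skip shown above.
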
193	doc/en/plugins_index/plugins_index.py	Normal file

@@ -0,0 +1,193 @@
'''
Script to generate the file `plugins_index.txt` with information about pytest plugins taken directly
from a live PyPI server.

This will evolve to include test compatibility (pythons and pytest versions) information also.
'''
from collections import namedtuple
import datetime
from distutils.version import LooseVersion
import itertools
from optparse import OptionParser
import os
import sys
import xmlrpclib

import pytest

#===================================================================================================
# iter_plugins
#===================================================================================================
def iter_plugins(client, search='pytest-'):
    '''
    Returns an iterator of (name, version) from PyPI.

    :param client: xmlrpclib.ServerProxy
    :param search: package names to search for
    '''
    for plug_data in client.search({'name' : search}):
        yield plug_data['name'], plug_data['version']


#===================================================================================================
# get_latest_versions
#===================================================================================================
def get_latest_versions(plugins):
    '''
    Returns an iterator of (name, version) from the given list of (name, version), but returning
    only the latest version of the package. Uses distutils.LooseVersion to ensure compatibility
    with PEP386.
    '''
    plugins = [(name, LooseVersion(version)) for (name, version) in plugins]
    for name, grouped_plugins in itertools.groupby(plugins, key=lambda x: x[0]):
        name, loose_version = list(grouped_plugins)[-1]
        yield name, str(loose_version)


#===================================================================================================
# obtain_plugins_table
#===================================================================================================
def obtain_plugins_table(plugins, client):
    '''
    Returns information to populate a table of plugins, their versions, authors, etc.

    The returned information is a list of columns of `ColumnData` namedtuples(text, link). Link
    can be None if the text for that column should not be linked to anything.

    :param plugins: list of (name, version)
    :param client: xmlrpclib.ServerProxy
    '''
    rows = []
    ColumnData = namedtuple('ColumnData', 'text link')
    headers = ['Name', 'Author', 'Downloads', 'Python 2.7', 'Python 3.3', 'Summary']
    pytest_version = pytest.__version__
    plugins = list(plugins)
    for index, (package_name, version) in enumerate(plugins):
        print package_name, version, '...',

        release_data = client.release_data(package_name, version)
        download_count = release_data['downloads']['last_month']
        image_url = '.. image:: http://pytest-plugs.herokuapp.com/status/{name}-{version}'.format(name=package_name,
                                                                                                  version=version)
        image_url += '?py={py}&pytest={pytest}'
        row = (
            ColumnData(package_name + '-' + version, release_data['release_url']),
            ColumnData(release_data['author'], release_data['author_email']),
            ColumnData(str(download_count), None),
            ColumnData(image_url.format(py='py27', pytest=pytest_version), None),
            ColumnData(image_url.format(py='py33', pytest=pytest_version), None),
            ColumnData(release_data['summary'], None),
        )
        assert len(row) == len(headers)
        rows.append(row)

        print 'OK (%d%%)' % ((index + 1) * 100 / len(plugins))

    return headers, rows


#===================================================================================================
# generate_plugins_index_from_table
#===================================================================================================
def generate_plugins_index_from_table(filename, headers, rows):
    '''
    Generates a RST file with the table data given.

    :param filename: output filename
    :param headers: see `obtain_plugins_table`
    :param rows: see `obtain_plugins_table`
    '''
    # creates a list of rows, each being a str containing appropriate column text and link
    table_texts = []
    for row in rows:
        column_texts = []
        for i, col_data in enumerate(row):
            text = '`%s <%s>`_' % (col_data.text, col_data.link) if col_data.link else col_data.text
            column_texts.append(text)
        table_texts.append(column_texts)

    # compute max length of each column so we can build the rst table
    column_lengths = [len(x) for x in headers]
    for column_texts in table_texts:
        for i, row_text in enumerate(column_texts):
            column_lengths[i] = max(column_lengths[i], len(row_text) + 2)

    def get_row_limiter(char):
        return ' '.join(char * length for length in column_lengths)

    with file(filename, 'w') as f:
        # write welcome
        print >> f, '.. _plugins_index:'
        print >> f
        print >> f, 'List of Third-Party Plugins'
        print >> f, '==========================='
        print >> f

        # table
        print >> f, get_row_limiter('=')
        for i, header in enumerate(headers):
            print >> f, '{:^{fill}}'.format(header, fill=column_lengths[i]),
        print >> f
        print >> f, get_row_limiter('=')

        for column_texts in table_texts:
            for i, row_text in enumerate(column_texts):
                print >> f, '{:^{fill}}'.format(row_text, fill=column_lengths[i]),
            print >> f
        print >> f
        print >> f, get_row_limiter('=')
        print >> f
        print >> f, '*(Downloads are given from last month only)*'
        print >> f
        print >> f, '*(Updated on %s)*' % _get_today_as_str()


#===================================================================================================
# _get_today_as_str
#===================================================================================================
def _get_today_as_str():
    '''
    internal. only exists so we can patch it in testing.
    '''
    return datetime.date.today().strftime('%Y-%m-%d')


#===================================================================================================
# generate_plugins_index
#===================================================================================================
def generate_plugins_index(client, filename):
    '''
    Generates an RST file with a table of the latest pytest plugins found in PyPI.

    :param client: xmlrpclib.ServerProxy
    :param filename: output filename
    '''
    plugins = get_latest_versions(iter_plugins(client))
    headers, rows = obtain_plugins_table(plugins, client)
    generate_plugins_index_from_table(filename, headers, rows)


#===================================================================================================
# main
#===================================================================================================
def main(argv):
    filename = os.path.join(os.path.dirname(__file__), 'plugins_index.txt')
    url = 'http://pypi.python.org/pypi'

    parser = OptionParser(description='Generates a restructured document of pytest plugins from PyPI')
    parser.add_option('-f', '--filename', default=filename, help='output filename [default: %default]')
    parser.add_option('-u', '--url', default=url, help='url of PyPI server to obtain data from [default: %default]')
    (options, _) = parser.parse_args(argv[1:])

    client = xmlrpclib.ServerProxy(options.url)
    generate_plugins_index(client, options.filename)

    print
    print '%s Updated.' % options.filename
    return 0

#===================================================================================================
# main
#===================================================================================================
if __name__ == '__main__':
    sys.exit(main(sys.argv))
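
Usage note (inferred from the option parser above): run the script directly
to regenerate the index next to the script, or point it elsewhere::

    python plugins_index.py
    python plugins_index.py -f /tmp/plugins_index.txt -u http://pypi.python.org/pypi
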
64	doc/en/plugins_index/plugins_index.txt	Normal file

@@ -0,0 +1,64 @@
.. _plugins_index:

List of Third-Party Plugins
===========================

========================================================================================== ==================================================================================== ========= ====================================================================================================== ====================================================================================================== =============================================================================================================================================
Name Author Downloads Python 2.7 Python 3.3 Summary
========================================================================================== ==================================================================================== ========= ====================================================================================================== ====================================================================================================== =============================================================================================================================================
`pytest-bdd-0.6.7 <http://pypi.python.org/pypi/pytest-bdd/0.6.7>`_ `Oleg Pidsadnyi <oleg.podsadny@gmail.com>`_ 1467 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bdd-0.6.7?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bdd-0.6.7?py=py33&pytest=2.4.2 BDD for pytest
`pytest-bdd-splinter-0.5.96 <http://pypi.python.org/pypi/pytest-bdd-splinter/0.5.96>`_ `Oleg Pidsadnyi <oleg.podsadny@gmail.com>`_ 3352 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bdd-splinter-0.5.96?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bdd-splinter-0.5.96?py=py33&pytest=2.4.2 Splinter subplugin for Pytest BDD plugin
`pytest-bench-0.2.5 <http://pypi.python.org/pypi/pytest-bench/0.2.5>`_ `Concordus Applications <support@concordusapps.com>`_ 1560 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bench-0.2.5?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bench-0.2.5?py=py33&pytest=2.4.2 Benchmark utility that plugs into pytest.
`pytest-blockage-0.1 <http://pypi.python.org/pypi/pytest-blockage/0.1>`_ `UNKNOWN <UNKNOWN>`_ 102 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-blockage-0.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-blockage-0.1?py=py33&pytest=2.4.2 Disable network requests during a test run.
`pytest-browsermob-proxy-0.1 <http://pypi.python.org/pypi/pytest-browsermob-proxy/0.1>`_ `Dave Hunt <dhunt@mozilla.com>`_ 55 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-browsermob-proxy-0.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-browsermob-proxy-0.1?py=py33&pytest=2.4.2 BrowserMob proxy plugin for py.test.
`pytest-bugzilla-0.2 <http://pypi.python.org/pypi/pytest-bugzilla/0.2>`_ `Noufal Ibrahim <noufal@nibrahim.net.in>`_ 89 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bugzilla-0.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-bugzilla-0.2?py=py33&pytest=2.4.2 py.test bugzilla integration plugin
`pytest-cache-1.0 <http://pypi.python.org/pypi/pytest-cache/1.0>`_ `Holger Krekel <holger.krekel@gmail.com>`_ 5561 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-cache-1.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-cache-1.0?py=py33&pytest=2.4.2 pytest plugin with mechanisms for caching across test runs
`pytest-capturelog-0.7 <http://pypi.python.org/pypi/pytest-capturelog/0.7>`_ `Meme Dough <memedough@gmail.com>`_ 1553 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-capturelog-0.7?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-capturelog-0.7?py=py33&pytest=2.4.2 py.test plugin to capture log messages
`pytest-codecheckers-0.2 <http://pypi.python.org/pypi/pytest-codecheckers/0.2>`_ `Ronny Pfannschmidt <Ronny.Pfannschmidt@gmx.de>`_ 384 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-codecheckers-0.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-codecheckers-0.2?py=py33&pytest=2.4.2 pytest plugin to add source code sanity checks (pep8 and friends)
`pytest-contextfixture-0.1.1 <http://pypi.python.org/pypi/pytest-contextfixture/0.1.1>`_ `Andreas Pelme <andreas@pelme.se>`_ 92 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-contextfixture-0.1.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-contextfixture-0.1.1?py=py33&pytest=2.4.2 Define pytest fixtures as context managers.
`pytest-couchdbkit-0.5.1 <http://pypi.python.org/pypi/pytest-couchdbkit/0.5.1>`_ `RonnyPfannschmidt <ronny.pfannschmidt@gmx.de>`_ 200 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-couchdbkit-0.5.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-couchdbkit-0.5.1?py=py33&pytest=2.4.2 py.test extension for per-test couchdb databases using couchdbkit
`pytest-cov-1.6 <http://pypi.python.org/pypi/pytest-cov/1.6>`_ `Meme Dough <memedough@gmail.com>`_ 23291 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-cov-1.6?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-cov-1.6?py=py33&pytest=2.4.2 py.test plugin for coverage reporting with support for both centralised and distributed testing, including subprocesses and multiprocessing
`pytest-dbfixtures-0.4.0 <http://pypi.python.org/pypi/pytest-dbfixtures/0.4.0>`_ `Clearcode - The A Room <thearoom@clearcode.cc>`_ 6223 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-dbfixtures-0.4.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-dbfixtures-0.4.0?py=py33&pytest=2.4.2 dbfixtures plugin for py.test.
`pytest-django-2.4 <http://pypi.python.org/pypi/pytest-django/2.4>`_ `Andreas Pelme <andreas@pelme.se>`_ 4809 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-django-2.4?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-django-2.4?py=py33&pytest=2.4.2 A Django plugin for py.test.
`pytest-django-lite-0.1.0 <http://pypi.python.org/pypi/pytest-django-lite/0.1.0>`_ `David Cramer <dcramer@gmail.com>`_ 987 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-django-lite-0.1.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-django-lite-0.1.0?py=py33&pytest=2.4.2 The bare minimum to integrate py.test with Django.
`pytest-figleaf-1.0 <http://pypi.python.org/pypi/pytest-figleaf/1.0>`_ `holger krekel <py-dev@codespeak.net,holger@merlinux.eu>`_ 53 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-figleaf-1.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-figleaf-1.0?py=py33&pytest=2.4.2 py.test figleaf coverage plugin
`pytest-flakes-0.2 <http://pypi.python.org/pypi/pytest-flakes/0.2>`_ `Florian Schulze, Holger Krekel and Ronny Pfannschmidt <florian.schulze@gmx.net>`_ 1146 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-flakes-0.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-flakes-0.2?py=py33&pytest=2.4.2 pytest plugin to check source code with pyflakes
`pytest-greendots-0.2 <http://pypi.python.org/pypi/pytest-greendots/0.2>`_ `UNKNOWN <UNKNOWN>`_ 139 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-greendots-0.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-greendots-0.2?py=py33&pytest=2.4.2 Green progress dots
`pytest-growl-0.1 <http://pypi.python.org/pypi/pytest-growl/0.1>`_ `Anthony Long <antlong@gmail.com>`_ 58 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-growl-0.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-growl-0.1?py=py33&pytest=2.4.2 Growl notifications for pytest results.
`pytest-incremental-0.3.0 <http://pypi.python.org/pypi/pytest-incremental/0.3.0>`_ `Eduardo Naufel Schettino <schettino72@gmail.com>`_ 180 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-incremental-0.3.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-incremental-0.3.0?py=py33&pytest=2.4.2 an incremental test runner (pytest plugin)
`pytest-instafail-0.1.1 <http://pypi.python.org/pypi/pytest-instafail/0.1.1>`_ `Janne Vanhala <janne.vanhala@gmail.com>`_ 418 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-instafail-0.1.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-instafail-0.1.1?py=py33&pytest=2.4.2 py.test plugin to show failures instantly
`pytest-ipdb-0.1-prerelease <http://pypi.python.org/pypi/pytest-ipdb/0.1-prerelease>`_ `Matthew de Verteuil <onceuponajooks@gmail.com>`_ 93 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-ipdb-0.1-prerelease?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-ipdb-0.1-prerelease?py=py33&pytest=2.4.2 A py.test plug-in to enable drop to ipdb debugger on test failure.
`pytest-jira-0.01 <http://pypi.python.org/pypi/pytest-jira/0.01>`_ `James Laska <james.laska@gmail.com>`_ 86 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-jira-0.01?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-jira-0.01?py=py33&pytest=2.4.2 py.test JIRA integration plugin, using markers
`pytest-konira-0.2 <http://pypi.python.org/pypi/pytest-konira/0.2>`_ `Alfredo Deza <alfredodeza [at] gmail.com>`_ 91 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-konira-0.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-konira-0.2?py=py33&pytest=2.4.2 Run Konira DSL tests with py.test
`pytest-localserver-0.3.2 <http://pypi.python.org/pypi/pytest-localserver/0.3.2>`_ `Sebastian Rahlf <basti AT redtoad DOT de>`_ 448 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-localserver-0.3.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-localserver-0.3.2?py=py33&pytest=2.4.2 py.test plugin to test server connections locally.
`pytest-marker-bugzilla-0.06 <http://pypi.python.org/pypi/pytest-marker-bugzilla/0.06>`_ `Eric Sammons <elsammons@gmail.com>`_ 191 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-marker-bugzilla-0.06?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-marker-bugzilla-0.06?py=py33&pytest=2.4.2 py.test bugzilla integration plugin, using markers
`pytest-markfiltration-0.8 <http://pypi.python.org/pypi/pytest-markfiltration/0.8>`_ `adam goucher <adam@element34.ca>`_ 253 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-markfiltration-0.8?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-markfiltration-0.8?py=py33&pytest=2.4.2 UNKNOWN
`pytest-marks-0.4 <http://pypi.python.org/pypi/pytest-marks/0.4>`_ `adam goucher <adam@element34.ca>`_ 225 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-marks-0.4?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-marks-0.4?py=py33&pytest=2.4.2 UNKNOWN
`pytest-monkeyplus-1.1.0 <http://pypi.python.org/pypi/pytest-monkeyplus/1.1.0>`_ `Virgil Dupras <hsoft@hardcoded.net>`_ 123 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-monkeyplus-1.1.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-monkeyplus-1.1.0?py=py33&pytest=2.4.2 pytest's monkeypatch subclass with extra functionalities
`pytest-mozwebqa-1.1.1 <http://pypi.python.org/pypi/pytest-mozwebqa/1.1.1>`_ `Dave Hunt <dhunt@mozilla.com>`_ 1037 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-mozwebqa-1.1.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-mozwebqa-1.1.1?py=py33&pytest=2.4.2 Mozilla WebQA plugin for py.test.
`pytest-oerp-0.2.0 <http://pypi.python.org/pypi/pytest-oerp/0.2.0>`_ `Leonardo Santagada <santagada@gmail.com>`_ 144 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-oerp-0.2.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-oerp-0.2.0?py=py33&pytest=2.4.2 pytest plugin to test OpenERP modules
`pytest-osxnotify-0.1.4 <http://pypi.python.org/pypi/pytest-osxnotify/0.1.4>`_ `Daniel Bader <mail@dbader.org>`_ 184 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-osxnotify-0.1.4?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-osxnotify-0.1.4?py=py33&pytest=2.4.2 OS X notifications for py.test results.
`pytest-paste-config-0.1 <http://pypi.python.org/pypi/pytest-paste-config/0.1>`_ `UNKNOWN <UNKNOWN>`_ 164 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-paste-config-0.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-paste-config-0.1?py=py33&pytest=2.4.2 Allow setting the path to a paste config file
`pytest-pep8-1.0.5 <http://pypi.python.org/pypi/pytest-pep8/1.0.5>`_ `Holger Krekel and Ronny Pfannschmidt <holger.krekel@gmail.com>`_ 5809 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-pep8-1.0.5?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-pep8-1.0.5?py=py33&pytest=2.4.2 pytest plugin to check PEP8 requirements
`pytest-poo-0.2 <http://pypi.python.org/pypi/pytest-poo/0.2>`_ `Andreas Pelme <andreas@pelme.se>`_ 108 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-poo-0.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-poo-0.2?py=py33&pytest=2.4.2 Visualize your crappy tests
`pytest-pydev-0.1 <http://pypi.python.org/pypi/pytest-pydev/0.1>`_ `Sebastian Rahlf <basti AT redtoad DOT de>`_ 100 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-pydev-0.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-pydev-0.1?py=py33&pytest=2.4.2 py.test plugin to connect to a remote debug server with PyDev or PyCharm.
`pytest-qt-1.0.2 <http://pypi.python.org/pypi/pytest-qt/1.0.2>`_ `Bruno Oliveira <nicoddemus@gmail.com>`_ 129 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-qt-1.0.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-qt-1.0.2?py=py33&pytest=2.4.2 pytest plugin that adds fixtures for testing Qt (PyQt and PySide) applications.
`pytest-quickcheck-0.8 <http://pypi.python.org/pypi/pytest-quickcheck/0.8>`_ `Tetsuya Morimoto <tetsuya dot morimoto at gmail dot com>`_ 345 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-quickcheck-0.8?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-quickcheck-0.8?py=py33&pytest=2.4.2 pytest plugin to generate random data inspired by QuickCheck
`pytest-rage-0.1 <http://pypi.python.org/pypi/pytest-rage/0.1>`_ `Leonardo Santagada <santagada@gmail.com>`_ 56 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-rage-0.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-rage-0.1?py=py33&pytest=2.4.2 pytest plugin to implement PEP712
`pytest-random-0.02 <http://pypi.python.org/pypi/pytest-random/0.02>`_ `Leah Klearman <lklrmn@gmail.com>`_ 116 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-random-0.02?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-random-0.02?py=py33&pytest=2.4.2 py.test plugin to randomize tests
`pytest-rerunfailures-0.03 <http://pypi.python.org/pypi/pytest-rerunfailures/0.03>`_ `Leah Klearman <lklrmn@gmail.com>`_ 147 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-rerunfailures-0.03?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-rerunfailures-0.03?py=py33&pytest=2.4.2 py.test plugin to re-run tests to eliminate flakey failures
`pytest-runfailed-0.3 <http://pypi.python.org/pypi/pytest-runfailed/0.3>`_ `Dimitri Merejkowsky <d.merej@gmail.com>`_ 88 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-runfailed-0.3?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-runfailed-0.3?py=py33&pytest=2.4.2 implement a --failed option for pytest
`pytest-runner-2.0 <http://pypi.python.org/pypi/pytest-runner/2.0>`_ `Jason R. Coombs <jaraco@jaraco.com>`_ 5657 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-runner-2.0?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-runner-2.0?py=py33&pytest=2.4.2 UNKNOWN
`pytest-sugar-0.2.2 <http://pypi.python.org/pypi/pytest-sugar/0.2.2>`_ `Teemu, Janne Vanhala <orkkiolento@gmail.com, janne.vanhala@gmail.com>`_ 348 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-sugar-0.2.2?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-sugar-0.2.2?py=py33&pytest=2.4.2 py.test plugin that adds instafail, ETA and neat graphics
`pytest-timeout-0.3 <http://pypi.python.org/pypi/pytest-timeout/0.3>`_ `Floris Bruynooghe <flub@devork.be>`_ 4351 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-timeout-0.3?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-timeout-0.3?py=py33&pytest=2.4.2 pytest plugin to abort tests after a timeout
`pytest-twisted-1.4 <http://pypi.python.org/pypi/pytest-twisted/1.4>`_ `Ralf Schmitt <ralf@brainbot.com>`_ 239 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-twisted-1.4?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-twisted-1.4?py=py33&pytest=2.4.2 A twisted plugin for py.test.
`pytest-xdist-1.9 <http://pypi.python.org/pypi/pytest-xdist/1.9>`_ `holger krekel and contributors <pytest-dev@python.org,holger@merlinux.eu>`_ 7894 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-xdist-1.9?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-xdist-1.9?py=py33&pytest=2.4.2 py.test xdist plugin for distributed testing and loop-on-failing modes
`pytest-xprocess-0.8 <http://pypi.python.org/pypi/pytest-xprocess/0.8>`_ `Holger Krekel <holger@merlinux.eu>`_ 96 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-xprocess-0.8?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-xprocess-0.8?py=py33&pytest=2.4.2 pytest plugin to manage external processes across test runs
`pytest-yamlwsgi-0.6 <http://pypi.python.org/pypi/pytest-yamlwsgi/0.6>`_ `Ali Afshar <aafshar@gmail.com>`_ 194 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-yamlwsgi-0.6?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-yamlwsgi-0.6?py=py33&pytest=2.4.2 Run tests against wsgi apps defined in yaml
`pytest-zap-0.1 <http://pypi.python.org/pypi/pytest-zap/0.1>`_ `Dave Hunt <dhunt@mozilla.com>`_ 63 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-zap-0.1?py=py27&pytest=2.4.2 .. image:: http://pytest-plugs.herokuapp.com/status/pytest-zap-0.1?py=py33&pytest=2.4.2 OWASP ZAP plugin for py.test.
========================================================================================== ==================================================================================== ========= ====================================================================================================== ====================================================================================================== =============================================================================================================================================
*(Downloads are given from last month only)*
*(Updated on 2013-12-11)*
101	doc/en/plugins_index/test_plugins_index.py	Normal file

@@ -0,0 +1,101 @@
import os
import xmlrpclib

import pytest


#===================================================================================================
# test_plugins_index
#===================================================================================================
def test_plugins_index(tmpdir, monkeypatch):
    '''
    Blackbox testing for plugins_index script. Calls main() generating a file and compares produced
    output to expected.

    .. note:: if the test fails, a file named `test_plugins_index.obtained` will be generated in
        the same directory as this test file. Ensure the contents are correct and overwrite
        the global `expected_output` with the new contents.
    '''
    import plugins_index

    # dummy interface to xmlrpclib.ServerProxy
    class DummyProxy(object):

        expected_url = 'http://dummy.pypi'

        def __init__(self, url):
            assert url == self.expected_url

        def search(self, query):
            assert query == {'name' : 'pytest-'}
            return [
                {'name': 'pytest-plugin1', 'version' : '0.8'},
                {'name': 'pytest-plugin1', 'version' : '1.0'},
                {'name': 'pytest-plugin2', 'version' : '1.2'},
            ]

        def release_data(self, package_name, version):
            results = {
                ('pytest-plugin1', '1.0') : {
                    'package_url' : 'http://plugin1',
                    'release_url' : 'http://plugin1/1.0',
                    'author' : 'someone',
                    'author_email' : 'someone@py.com',
                    'summary' : 'some plugin',
                    'downloads': {'last_day': 1, 'last_month': 4, 'last_week': 2},
                },

                ('pytest-plugin2', '1.2') : {
                    'package_url' : 'http://plugin2',
                    'release_url' : 'http://plugin2/1.2',
                    'author' : 'other',
                    'author_email' : 'other@py.com',
                    'summary' : 'some other plugin',
                    'downloads': {'last_day': 10, 'last_month': 40, 'last_week': 20},
                },
            }

            return results[(package_name, version)]

    monkeypatch.setattr(xmlrpclib, 'ServerProxy', DummyProxy, 'foo')
    monkeypatch.setattr(plugins_index, '_get_today_as_str', lambda: '2013-10-20')

    output_file = str(tmpdir.join('output.txt'))
    assert plugins_index.main(['', '-f', output_file, '-u', DummyProxy.expected_url]) == 0

    with file(output_file, 'rU') as f:
        obtained_output = f.read()

    if obtained_output != expected_output:
        obtained_file = os.path.splitext(__file__)[0] + '.obtained'
        with file(obtained_file, 'w') as f:
            f.write(obtained_output)

    assert obtained_output == expected_output


expected_output = '''\
.. _plugins_index:

List of Third-Party Plugins
===========================

============================================ ============================= ========= ============================================================================================= ============================================================================================= ===================
                    Name                                Author             Downloads                                          Python 2.7                                                                               Python 3.3                                              Summary
============================================ ============================= ========= ============================================================================================= ============================================================================================= ===================
 `pytest-plugin1-1.0 <http://plugin1/1.0>`_   `someone <someone@py.com>`_      4      .. image:: http://pytest-plugs.herokuapp.com/status/pytest-plugin1-1.0?py=py27&pytest=2.4.2  .. image:: http://pytest-plugs.herokuapp.com/status/pytest-plugin1-1.0?py=py33&pytest=2.4.2     some plugin
 `pytest-plugin2-1.2 <http://plugin2/1.2>`_     `other <other@py.com>`_        40     .. image:: http://pytest-plugs.herokuapp.com/status/pytest-plugin2-1.2?py=py27&pytest=2.4.2  .. image:: http://pytest-plugs.herokuapp.com/status/pytest-plugin2-1.2?py=py33&pytest=2.4.2   some other plugin

============================================ ============================= ========= ============================================================================================= ============================================================================================= ===================

*(Downloads are given from last month only)*

*(Updated on 2013-10-20)*
'''


#===================================================================================================
# main
#===================================================================================================
if __name__ == '__main__':
    pytest.main()
@@ -70,7 +70,8 @@ For larger test suites it's usually a good idea to have one file
where you define the markers which you then consistently apply
throughout your test suite.

Alternatively, the pre pytest-2.4 way to specify `condition strings <condition strings>`_ instead of booleans will remain fully supported in future
Alternatively, the pre pytest-2.4 way to specify :ref:`condition strings
<string conditions>` instead of booleans will remain fully supported in future
versions of pytest.  It couldn't be easily used for importing markers
between test modules so it's no longer advertised as the primary method.

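The two spellings contrasted, for illustration (a sketch; the boolean form
is the pytest-2.4+ recommendation precisely because the marker can be
imported between test modules)::

    import sys
    import pytest

    # boolean condition (pytest-2.4+), easy to share between modules:
    minversion = pytest.mark.skipif(sys.version_info < (3, 3),
                                    reason="at least Python-3.3 required")

    @minversion
    def test_new_spelling():
        pass

    # pre-2.4 condition string, still fully supported:
    @pytest.mark.skipif("sys.version_info < (3, 3)")
    def test_old_spelling():
        pass
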
@@ -158,7 +159,7 @@ Running it with the report-on-xfail option gives this output::

    example $ py.test -rx xfail_demo.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
    collected 6 items

    xfail_demo.py xxxxxx

@@ -175,7 +176,7 @@ Running it with the report-on-xfail option gives this output::

    XFAIL xfail_demo.py::test_hello6
      reason: reason

    ======================== 6 xfailed in 0.05 seconds =========================
    ======================== 6 xfailed in 0.06 seconds =========================

.. _`skip/xfail with parametrize`:

@@ -232,7 +233,7 @@ The version will be read from the specified
module's ``__version__`` attribute.


.. _`string conditions`:
.. _string conditions:

specifying conditions as strings versus booleans
----------------------------------------------------------

@@ -10,7 +10,10 @@ Tutorial examples and blog postings

.. _`tutorial1 repository`: http://bitbucket.org/hpk42/pytest-tutorial1/
.. _`pycon 2010 tutorial PDF`: http://bitbucket.org/hpk42/pytest-tutorial1/raw/tip/pytest-basic.pdf

Basic usage and funcargs:
Basic usage and fixtures:

- `pytest feature and release highlights (GERMAN, October 2013)
  <http://pyvideo.org/video/2429/pytest-feature-and-new-release-highlights>`_

- `pytest introduction from Brian Okken (January 2013)
  <http://pythontesting.net/framework/pytest-introduction/>`_

@@ -29,7 +29,7 @@ Running this would result in a passed test except for the last

    $ py.test test_tmpdir.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
    collected 1 items

    test_tmpdir.py F

@@ -37,7 +37,7 @@ Running this would result in a passed test except for the last

    ================================= FAILURES =================================
    _____________________________ test_create_file _____________________________

    tmpdir = local('/tmp/pytest-187/test_create_file0')
    tmpdir = local('/tmp/pytest-278/test_create_file0')

        def test_create_file(tmpdir):
            p = tmpdir.mkdir("sub").join("hello.txt")

@@ -48,7 +48,7 @@ Running this would result in a passed test except for the last

    E       assert 0

    test_tmpdir.py:7: AssertionError
    ========================= 1 failed in 0.01 seconds =========================
    ========================= 1 failed in 0.02 seconds =========================

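The test producing this output is the docs' standard ``tmpdir`` demo; a
reconstructed sketch (the final ``assert 0`` fails on purpose so the
traceback displays the per-test directory)::

    def test_create_file(tmpdir):
        p = tmpdir.mkdir("sub").join("hello.txt")
        p.write("content")
        assert p.read() == "content"
        assert len(tmpdir.listdir()) == 1
        assert 0   # deliberate failure to show the tmpdir value
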
.. _`base temporary directory`:

@@ -88,7 +88,7 @@ the ``self.db`` values in the traceback::

    $ py.test test_unittest_db.py
    =========================== test session starts ============================
    platform linux2 -- Python 2.7.3 -- pytest-2.4.2
    platform linux2 -- Python 2.7.3 -- pytest-2.5.0
    collected 2 items

    test_unittest_db.py FF

@@ -101,7 +101,7 @@ the ``self.db`` values in the traceback::

        def test_method1(self):
            assert hasattr(self, "db")
    >       assert 0, self.db   # fail for demo purposes
    E       AssertionError: <conftest.DummyDB instance at 0x27b2b00>
    E       AssertionError: <conftest.DummyDB instance at 0x101b3b0>

    test_unittest_db.py:9: AssertionError
    ___________________________ MyTest.test_method2 ____________________________

@@ -110,7 +110,7 @@ the ``self.db`` values in the traceback::

        def test_method2(self):
    >       assert 0, self.db   # fail for demo purposes
    E       AssertionError: <conftest.DummyDB instance at 0x27b2b00>
    E       AssertionError: <conftest.DummyDB instance at 0x101b3b0>

    test_unittest_db.py:12: AssertionError
    ========================= 2 failed in 0.02 seconds =========================

74	extra/get_issues.py	Normal file

@@ -0,0 +1,74 @@
import json
import py
import textwrap

issues_url = "http://bitbucket.org/api/1.0/repositories/hpk42/pytest/issues"

import requests

def get_issues():
    chunksize = 50
    start = 0
    issues = []
    while 1:
        post_data = {"accountname": "hpk42",
                     "repo_slug": "pytest",
                     "start": start,
                     "limit": chunksize}
        print ("getting from", start)
        r = requests.get(issues_url, params=post_data)
        data = r.json()
        issues.extend(data["issues"])
        if start + chunksize >= data["count"]:
            return issues
        start += chunksize

kind2num = "bug enhancement task proposal".split()

status2num = "new open resolved duplicate invalid wontfix".split()

def main(args):
    cachefile = py.path.local(args.cache)
    if not cachefile.exists() or args.refresh:
        issues = get_issues()
        cachefile.write(json.dumps(issues))
    else:
        issues = json.loads(cachefile.read())

    open_issues = [x for x in issues
                   if x["status"] in ("new", "open")]

    def kind_and_id(x):
        kind = x["metadata"]["kind"]
        return kind2num.index(kind), len(issues)-int(x["local_id"])
    open_issues.sort(key=kind_and_id)
    report(open_issues)

def report(issues):
    for issue in issues:
        metadata = issue["metadata"]
        priority = issue["priority"]
        title = issue["title"]
        content = issue["content"]
        kind = metadata["kind"]
        status = issue["status"]
        id = issue["local_id"]
        link = "https://bitbucket.org/hpk42/pytest/issue/%s/" % id
        print("----")
        print(status, kind, link)
        print(title)
        #print()
        #lines = content.split("\n")
        #print ("\n".join(lines[:3]))
        #if len(lines) > 3 or len(content) > 240:
        #    print ("...")

if __name__ == "__main__":
    import argparse
    parser = argparse.ArgumentParser("process bitbucket issues")
    parser.add_argument("--refresh", action="store_true",
                        help="invalidate cache, refresh issues")
    parser.add_argument("--cache", action="store", default="issues.json",
                        help="cache file")
    args = parser.parse_args()
    main(args)
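
Usage note (inferred from the argparse setup above): run the script directly;
``--refresh`` bypasses the local cache file::

    python extra/get_issues.py             # uses cached issues.json if present
    python extra/get_issues.py --refresh   # re-fetch all issues from bitbucket
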
32	setup.py

@@ -1,9 +1,23 @@
import os, sys
from setuptools import setup, Command

classifiers=['Development Status :: 6 - Mature',
             'Intended Audience :: Developers',
             'License :: OSI Approved :: MIT License',
             'Operating System :: POSIX',
             'Operating System :: Microsoft :: Windows',
             'Operating System :: MacOS :: MacOS X',
             'Topic :: Software Development :: Testing',
             'Topic :: Software Development :: Libraries',
             'Topic :: Utilities',
             'Programming Language :: Python :: 2',
             'Programming Language :: Python :: 3'] + [
    ("Programming Language :: Python :: %s" % x) for x in
    "2.6 2.7 3.0 3.1 3.2 3.3".split()]

long_description = open("README.rst").read()
def main():
    install_requires = ["py>=1.4.17"]
    install_requires = ["py>=1.4.19"]
    if sys.version_info < (2,7):
        install_requires.append("argparse")
    if sys.platform == "win32":

@@ -13,29 +27,17 @@ def main():
        name='pytest',
        description='py.test: simple powerful testing with Python',
        long_description = long_description,
        version='2.4.2',
        version='2.5.0',
        url='http://pytest.org',
        license='MIT license',
        platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],
        author='Holger Krekel, Benjamin Peterson, Ronny Pfannschmidt, Floris Bruynooghe and others',
        author_email='holger at merlinux.eu',
        entry_points= make_entry_points(),
        classifiers=classifiers,
        cmdclass = {'test': PyTest},
        # the following should be enabled for release
        install_requires=install_requires,
        classifiers=['Development Status :: 6 - Mature',
                     'Intended Audience :: Developers',
                     'License :: OSI Approved :: MIT License',
                     'Operating System :: POSIX',
                     'Operating System :: Microsoft :: Windows',
                     'Operating System :: MacOS :: MacOS X',
                     'Topic :: Software Development :: Testing',
                     'Topic :: Software Development :: Libraries',
                     'Topic :: Utilities',
                     'Programming Language :: Python :: 2',
                     'Programming Language :: Python :: 3'] + [
                     ("Programming Language :: Python :: %s" % x) for x in
                     "2.4 2.5 2.6 2.7 3.0 3.1 3.2 3.3".split()],
        packages=['_pytest', '_pytest.assertion'],
        py_modules=['pytest'],
        zip_safe=False,

@@ -1,4 +1,4 @@
import sys, py, pytest
import py, pytest

class TestGeneralUsage:
    def test_config_error(self, testdir):

@@ -14,7 +14,7 @@ class TestGeneralUsage:
        ])

    def test_root_conftest_syntax_error(self, testdir):
        p = testdir.makepyfile(conftest="raise SyntaxError\n")
        testdir.makepyfile(conftest="raise SyntaxError\n")
        result = testdir.runpytest()
        result.stderr.fnmatch_lines(["*raise SyntaxError*"])
        assert result.ret != 0

@@ -67,7 +67,7 @@ class TestGeneralUsage:
        result = testdir.runpytest("-s", "asd")
        assert result.ret == 4  # EXIT_USAGEERROR
        result.stderr.fnmatch_lines(["ERROR: file not found*asd"])
        s = result.stdout.fnmatch_lines([
        result.stdout.fnmatch_lines([
            "*---configure",
            "*---unconfigure",
        ])

@@ -307,6 +307,24 @@ class TestGeneralUsage:
            '*ERROR*',
        ])
        assert result.ret == 4  # usage error only if item not found

    def test_namespace_import_doesnt_confuse_import_hook(self, testdir):
        # Ref #383. Python 3.3's namespace package messed with our import hooks
        # Importing a module that didn't exist, even if the ImportError was
        # gracefully handled, would make our test crash.
        testdir.mkdir('not_a_package')
        p = testdir.makepyfile("""
            try:
                from not_a_package import doesnt_exist
            except ImportError:
                # We handle the import error gracefully here
                pass

            def test_whatever():
                pass
        """)
        res = testdir.runpytest(p.basename)
        assert res.ret == 0


class TestInvocationVariants:

@@ -539,7 +557,6 @@ class TestDurations:
        assert result.ret == 0
        for x in "123":
            for y in 'call',: #'setup', 'call', 'teardown':
                l = []
                for line in result.stdout.lines:
                    if ("test_%s" % x) in line and y in line:
                        break

@@ -12,10 +12,7 @@ def pytest_addoption(parser):
|
||||
help=("run FD checks if lsof is available"))
|
||||
|
||||
def pytest_configure(config):
|
||||
config.addinivalue_line("markers",
|
||||
"multi(arg=[value1,value2, ...]): call the test function "
|
||||
"multiple times with arg=value1, then with arg=value2, ... "
|
||||
)
|
||||
config._basedir = py.path.local()
|
||||
if config.getvalue("lsof"):
|
||||
try:
|
||||
out = py.process.cmdexec("lsof -p %d" % pid)
|
||||
@@ -46,30 +43,13 @@ def check_open_files(config):
|
||||
config._numfiles = len(lines2)
|
||||
raise AssertionError("\n".join(error))
|
||||
|
||||
@pytest.mark.tryfirst # XXX rather do item.addfinalizer
|
||||
def pytest_runtest_setup(item):
|
||||
item._oldir = py.path.local()
|
||||
|
||||
def pytest_runtest_teardown(item, __multicall__):
|
||||
item._oldir.chdir()
|
||||
item.config._basedir.chdir()
|
||||
if hasattr(item.config, '_numfiles'):
|
||||
x = __multicall__.execute()
|
||||
check_open_files(item.config)
|
||||
return x
|
||||
|
||||
def pytest_generate_tests(metafunc):
|
||||
multi = getattr(metafunc.function, 'multi', None)
|
||||
if multi is not None:
|
||||
assert len(multi.kwargs) == 1
|
||||
for name, l in multi.kwargs.items():
|
||||
for val in l:
|
||||
metafunc.addcall(funcargs={name: val})
|
||||
elif 'anypython' in metafunc.fixturenames:
|
||||
for name in ('python2.5', 'python2.6',
|
||||
'python2.7', 'python3.2', "python3.3",
|
||||
'pypy', 'jython'):
|
||||
metafunc.addcall(id=name, param=name)
|
||||
|
||||
# XXX copied from execnet's conftest.py - needs to be merged
|
||||
winpymap = {
|
||||
'python2.7': r'C:\Python27\python.exe',
|
||||
@@ -100,7 +80,10 @@ def getexecutable(name, cache={}):
|
||||
cache[name] = executable
|
||||
return executable
|
||||
|
||||
def pytest_funcarg__anypython(request):
|
||||
@pytest.fixture(params=['python2.5', 'python2.6',
|
||||
'python2.7', 'python3.2', "python3.3",
|
||||
'pypy', 'jython'])
|
||||
def anypython(request):
|
||||
name = request.param
|
||||
executable = getexecutable(name)
|
||||
if executable is None:
|
||||
|
||||
@@ -1,6 +1,4 @@
import pytest, py, sys
from _pytest import python as funcargs
from _pytest.python import FixtureLookupError
import pytest, py

class TestModule:
def test_failing_import(self, testdir):
@@ -32,7 +30,7 @@ class TestModule:

def test_module_considers_pluginmanager_at_import(self, testdir):
modcol = testdir.getmodulecol("pytest_plugins='xasdlkj',")
pytest.raises(ImportError, "modcol.obj")
pytest.raises(ImportError, lambda: modcol.obj)

class TestClass:
def test_class_with_init_skip_collect(self, testdir):
@@ -289,22 +287,10 @@ class TestFunction:
pass
f1 = pytest.Function(name="name", parent=session, config=config,
args=(1,), callobj=func1)
assert f1 == f1
f2 = pytest.Function(name="name",config=config,
args=(1,), callobj=func2, parent=session)
assert not f1 == f2
callobj=func2, parent=session)
assert f1 != f2
f3 = pytest.Function(name="name", parent=session, config=config,
args=(1,2), callobj=func2)
assert not f3 == f2
assert f3 != f2

assert not f3 == f1
assert f3 != f1

f1_b = pytest.Function(name="name", parent=session, config=config,
args=(1,), callobj=func1)
assert f1 == f1_b
assert not f1 != f1_b

def test_issue197_parametrize_emptyset(self, testdir):
testdir.makepyfile("""
@@ -336,7 +322,7 @@ class TestFunction:
def test_function(arg):
assert arg.__class__.__name__ == "A"
""")
reprec = testdir.inline_run()
reprec = testdir.inline_run("--fulltrace")
reprec.assertoutcome(passed=1)

def test_parametrize_with_non_hashable_values(self, testdir):
@@ -357,6 +343,68 @@ class TestFunction:
rec = testdir.inline_run()
rec.assertoutcome(passed=2)


def test_parametrize_with_non_hashable_values_indirect(self, testdir):
"""Test parametrization with non-hashable values with indirect parametrization."""
testdir.makepyfile("""
archival_mapping = {
'1.0': {'tag': '1.0'},
'1.2.2a1': {'tag': 'release-1.2.2a1'},
}

import pytest

@pytest.fixture
def key(request):
return request.param

@pytest.fixture
def value(request):
return request.param

@pytest.mark.parametrize('key value'.split(),
archival_mapping.items(), indirect=True)
def test_archival_to_version(key, value):
assert key in archival_mapping
assert value == archival_mapping[key]
""")
rec = testdir.inline_run()
rec.assertoutcome(passed=2)


def test_parametrize_overrides_fixture(self, testdir):
"""Test parametrization when parameter overrides existing fixture with same name."""
testdir.makepyfile("""
import pytest

@pytest.fixture
def value():
return 'value'

@pytest.mark.parametrize('value',
['overrided'])
def test_overrided_via_param(value):
assert value == 'overrided'
""")
rec = testdir.inline_run()
rec.assertoutcome(passed=1)


def test_parametrize_with_mark(self, testdir):
items = testdir.getitems("""
import pytest
@pytest.mark.foo
@pytest.mark.parametrize('arg', [
1,
pytest.mark.bar(pytest.mark.baz(2))
])
def test_function(arg):
pass
""")
keywords = [item.keywords for item in items]
assert 'foo' in keywords[0] and 'bar' not in keywords[0] and 'baz' not in keywords[0]
assert 'foo' in keywords[1] and 'bar' in keywords[1] and 'baz' in keywords[1]

def test_function_equality_with_callspec(self, testdir, tmpdir):
items = testdir.getitems("""
import pytest
@@ -606,7 +654,7 @@ class TestReportInfo:
return MyFunction(name, parent=collector)
""")
item = testdir.getitem("def test_func(): pass")
runner = item.config.pluginmanager.getplugin("runner")
item.config.pluginmanager.getplugin("runner")
assert item.location == ("ABCDE", 42, "custom")

def test_func_reportinfo(self, testdir):
@@ -696,7 +744,7 @@ def test_customized_python_discovery_functions(testdir):
[pytest]
python_functions=_test
""")
p = testdir.makepyfile("""
testdir.makepyfile("""
def _test_underscore():
pass
""")

@@ -2,7 +2,7 @@ import pytest, py, sys
from _pytest import python as funcargs
from _pytest.python import FixtureLookupError
from _pytest.pytester import get_public_names

from textwrap import dedent

def test_getfuncargnames():
def f(): pass
@@ -247,7 +247,7 @@ class TestFillFixtures:
assert result.ret == 0

def test_funcarg_lookup_error(self, testdir):
p = testdir.makepyfile("""
testdir.makepyfile("""
def test_lookup_error(unknown):
pass
""")
@@ -307,7 +307,7 @@ class TestRequestBasic:
def pytest_funcarg__something(request):
return 1
""")
item = testdir.makepyfile("""
testdir.makepyfile("""
def pytest_funcarg__something(request):
return request.getfuncargvalue("something") + 1
def test_func(something):
@@ -473,7 +473,7 @@ class TestRequestBasic:
assert l == ["module", "function", "class",
"function", "method", "function"]
""")
reprec = testdir.inline_run()
reprec = testdir.inline_run("-v")
reprec.assertoutcome(passed=3)

def test_fixtures_sub_subdir_normalize_sep(self, testdir):
@@ -634,13 +634,13 @@ class TestRequestCachedSetup:
l.append("setup")
def teardown(val):
l.append("teardown")
ret1 = req1.cached_setup(setup, teardown, scope="function")
req1.cached_setup(setup, teardown, scope="function")
assert l == ['setup']
# artificial call of finalizer
setupstate = req1._pyfuncitem.session._setupstate
setupstate._callfinalizers(item1)
assert l == ["setup", "teardown"]
ret2 = req1.cached_setup(setup, teardown, scope="function")
req1.cached_setup(setup, teardown, scope="function")
assert l == ["setup", "teardown", "setup"]
setupstate._callfinalizers(item1)
assert l == ["setup", "teardown", "setup", "teardown"]
@@ -914,6 +914,34 @@ class TestFixtureUsages:
reprec = testdir.inline_run()
reprec.assertoutcome(passed=1)

def test_fixture_parametrized_with_iterator(self, testdir):
testdir.makepyfile("""
import pytest

l = []
def f():
yield 1
yield 2
dec = pytest.fixture(scope="module", params=f())

@dec
def arg(request):
return request.param
@dec
def arg2(request):
return request.param

def test_1(arg):
l.append(arg)
def test_2(arg2):
l.append(arg2*10)
""")
reprec = testdir.inline_run("-v")
reprec.assertoutcome(passed=4)
l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
assert l == [1,2, 10,20]


class TestFixtureManagerParseFactories:
def pytest_funcarg__testdir(self, request):
testdir = request.getfuncargvalue("testdir")
@@ -1315,6 +1343,7 @@ class TestAutouseManagement:
l.append("step2-%d" % item)

def test_finish():
print (l)
assert l == ["setup-1", "step1-1", "step2-1", "teardown-1",
"setup-2", "step1-2", "step2-2", "teardown-2",]
""")
@@ -1461,7 +1490,7 @@ class TestFixtureMarker:
'request.getfuncargvalue("arg")',
'request.cached_setup(lambda: None, scope="function")',
], ids=["getfuncargvalue", "cached_setup"])
def test_scope_mismatch(self, testdir, method):
def test_scope_mismatch_various(self, testdir, method):
testdir.makeconftest("""
import pytest
finalized = []
@@ -1609,7 +1638,7 @@ class TestFixtureMarker:
""")

def test_class_ordering(self, testdir):
p = testdir.makeconftest("""
testdir.makeconftest("""
import pytest

l = []
@@ -1683,23 +1712,22 @@ class TestFixtureMarker:
l.append("test3")
def test_4(modarg, arg):
l.append("test4")
def test_5():
assert len(l) == 12 * 3
expected = [
'create:1', 'test1', 'fin:1', 'create:2', 'test1',
'fin:2', 'create:mod1', 'test2', 'create:1', 'test3',
'fin:1', 'create:2', 'test3', 'fin:2', 'create:1',
'test4', 'fin:1', 'create:2', 'test4', 'fin:2',
'fin:mod1', 'create:mod2', 'test2', 'create:1', 'test3',
'fin:1', 'create:2', 'test3', 'fin:2', 'create:1',
'test4', 'fin:1', 'create:2', 'test4', 'fin:2',
'fin:mod2']
import pprint
pprint.pprint(list(zip(l, expected)))
assert l == expected
""")
reprec = testdir.inline_run("-v")
reprec.assertoutcome(passed=12+1)
reprec.assertoutcome(passed=12)
l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
expected = [
'create:1', 'test1', 'fin:1', 'create:2', 'test1',
'fin:2', 'create:mod1', 'test2', 'create:1', 'test3',
'fin:1', 'create:2', 'test3', 'fin:2', 'create:1',
'test4', 'fin:1', 'create:2', 'test4', 'fin:2',
'fin:mod1', 'create:mod2', 'test2', 'create:1', 'test3',
'fin:1', 'create:2', 'test3', 'fin:2', 'create:1',
'test4', 'fin:1', 'create:2', 'test4', 'fin:2',
'fin:mod2']
import pprint
pprint.pprint(list(zip(l, expected)))
assert l == expected

def test_parametrized_fixture_teardown_order(self, testdir):
testdir.makepyfile("""
@@ -1738,35 +1766,100 @@ class TestFixtureMarker:
""")
assert "error" not in result.stdout.str()

def test_fixture_finalizer(self, testdir):
testdir.makeconftest("""
import pytest
import sys

@pytest.fixture
def browser(request):

def finalize():
sys.stdout.write('Finalized')
request.addfinalizer(finalize)
return {}
""")
b = testdir.mkdir("subdir")
b.join("test_overriden_fixture_finalizer.py").write(dedent("""
import pytest
@pytest.fixture
def browser(browser):
browser['visited'] = True
return browser

def test_browser(browser):
assert browser['visited'] is True
"""))
reprec = testdir.runpytest("-s")
for test in ['test_browser']:
reprec.stdout.fnmatch_lines('*Finalized*')

def test_class_scope_with_normal_tests(self, testdir):
testpath = testdir.makepyfile("""
import pytest

class Box:
value = 0

@pytest.fixture(scope='class')
def a(request):
Box.value += 1
return Box.value

def test_a(a):
assert a == 1

class Test1:
def test_b(self, a):
assert a == 2

class Test2:
def test_c(self, a):
assert a == 3""")
reprec = testdir.inline_run(testpath)
for test in ['test_a', 'test_b', 'test_c']:
assert reprec.matchreport(test).passed

def test_request_is_clean(self, testdir):
testdir.makepyfile("""
import pytest
l = []
@pytest.fixture(params=[1, 2])
def fix(request):
request.addfinalizer(lambda: l.append(request.param))
def test_fix(fix):
pass
""")
reprec = testdir.inline_run("-s")
l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
assert l == [1,2]

def test_parametrize_separated_lifecycle(self, testdir):
testdir.makepyfile("""
import pytest

l = []
@pytest.fixture(scope="module", params=[1, 2])
def arg(request):
request.config.l = l # to access from outer
x = request.param
request.addfinalizer(lambda: l.append("fin%s" % x))
return request.param

l = []
def test_1(arg):
l.append(arg)
def test_2(arg):
l.append(arg)
""")
reprec = testdir.inline_run("-v")
reprec = testdir.inline_run("-vs")
reprec.assertoutcome(passed=4)
l = reprec.getcalls("pytest_configure")[0].config.l
l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
import pprint
pprint.pprint(l)
assert len(l) == 6
#assert len(l) == 6
assert l[0] == l[1] == 1
assert l[2] == "fin1"
assert l[3] == l[4] == 2
assert l[5] == "fin2"


def test_parametrize_function_scoped_finalizers_called(self, testdir):
testdir.makepyfile("""
import pytest
@@ -1789,6 +1882,69 @@ class TestFixtureMarker:
reprec = testdir.inline_run("-v")
reprec.assertoutcome(passed=5)


@pytest.mark.issue246
@pytest.mark.parametrize("scope", ["session", "function", "module"])
def test_finalizer_order_on_parametrization(self, scope, testdir):
testdir.makepyfile("""
import pytest
l = []

@pytest.fixture(scope=%(scope)r, params=["1"])
def fix1(request):
return request.param

@pytest.fixture(scope=%(scope)r)
def fix2(request, base):
def cleanup_fix2():
assert not l, "base should not have been finalized"
request.addfinalizer(cleanup_fix2)

@pytest.fixture(scope=%(scope)r)
def base(request, fix1):
def cleanup_base():
l.append("fin_base")
print ("finalizing base")
request.addfinalizer(cleanup_base)

def test_begin():
pass
def test_baz(base, fix2):
pass
def test_other():
pass
""" % {"scope": scope})
reprec = testdir.inline_run("-lvs")
reprec.assertoutcome(passed=3)

@pytest.mark.issue396
def test_class_scope_parametrization_ordering(self, testdir):
testdir.makepyfile("""
import pytest
l = []
@pytest.fixture(params=["John", "Doe"], scope="class")
def human(request):
request.addfinalizer(lambda: l.append("fin %s" % request.param))
return request.param

class TestGreetings:
def test_hello(self, human):
l.append("test_hello")

class TestMetrics:
def test_name(self, human):
l.append("test_name")

def test_population(self, human):
l.append("test_population")
""")
reprec = testdir.inline_run()
reprec.assertoutcome(passed=6)
l = reprec.getcalls("pytest_runtest_call")[0].item.module.l
assert l == ["test_hello", "fin John", "test_hello", "fin Doe",
"test_name", "test_population", "fin John",
"test_name", "test_population", "fin Doe"]

def test_parametrize_setup_function(self, testdir):
testdir.makepyfile("""
import pytest

@@ -1,5 +1,6 @@
import pytest, py, sys
import pytest
from _pytest import runner
from _pytest import python

class TestOEJSKITSpecials:
def test_funcarg_non_pycollectobj(self, testdir): # rough jstests usage
@@ -55,6 +56,20 @@ class TestOEJSKITSpecials:
assert not clscol.funcargs


def test_wrapped_getfslineno():
def func():
pass
def wrap(f):
func.__wrapped__ = f
func.patchings = ["qwe"]
return func
@wrap
def wrapped_func(x, y, z):
pass
fs, lineno = python.getfslineno(wrapped_func)
fs2, lineno2 = python.getfslineno(wrap)
assert lineno > lineno2, "getfslineno does not unwrap correctly"

class TestMockDecoration:
def test_wrapped_getfuncargnames(self):
from _pytest.python import getfuncargnames
@@ -118,6 +133,32 @@ class TestMockDecoration:
""")
reprec = testdir.inline_run()
reprec.assertoutcome(passed=2)
calls = reprec.getcalls("pytest_runtest_logreport")
funcnames = [call.report.location[2] for call in calls
if call.report.when == "call"]
assert funcnames == ["T.test_hello", "test_someting"]

def test_mock_sorting(self, testdir):
pytest.importorskip("mock", "1.0.1")
testdir.makepyfile("""
import os
import mock

@mock.patch("os.path.abspath")
def test_one(abspath):
pass
@mock.patch("os.path.abspath")
def test_two(abspath):
pass
@mock.patch("os.path.abspath")
def test_three(abspath):
pass
""")
reprec = testdir.inline_run()
calls = reprec.getreports("pytest_runtest_logreport")
calls = [x for x in calls if x.when == "call"]
names = [x.nodeid.split("::")[-1] for x in calls]
assert names == ["test_one", "test_two", "test_three"]


class TestReRunTests:
@@ -150,3 +191,7 @@ class TestReRunTests:
result.stdout.fnmatch_lines("""
*2 passed*
""")

def test_pytestconfig_is_session_scoped():
from _pytest.python import pytestconfig
assert pytestconfig._pytestfixturefunction.scope == "session"

@@ -1,7 +1,6 @@

import pytest, py, sys
import pytest, py
from _pytest import python as funcargs
from _pytest.python import FixtureLookupError

class TestMetafunc:
def Metafunc(self, func):
@@ -194,8 +193,8 @@ class TestMetafunc:
metafunc.parametrize('y', [2])
def pytest_funcarg__x(request):
return request.param * 10
def pytest_funcarg__y(request):
return request.param
#def pytest_funcarg__y(request):
# return request.param

def test_simple(x,y):
assert x in (10,20)
@@ -593,6 +592,8 @@ class TestMetafuncFunctional:

def test_it(foo):
pass
def test_it2(foo):
pass
""")
reprec = testdir.inline_run("--collect-only")
assert not reprec.getcalls("pytest_internalerror")
@@ -815,7 +816,7 @@ class TestMarkersWithParametrization:
reprec.assertoutcome(passed=2, skipped=2)


@pytest.mark.xfail(reason="issue 290")
@pytest.mark.issue290
def test_parametrize_ID_generation_string_int_works(self, testdir):
testdir.makepyfile("""
import pytest

@@ -29,7 +29,7 @@ class TestRaises:
def test_raises_flip_builtin_AssertionError(self):
# we replace AssertionError on python level
# however c code might still raise the builtin one
from _pytest.assertion.util import BuiltinAssertionError
from _pytest.assertion.util import BuiltinAssertionError # noqa
pytest.raises(AssertionError,"""
raise BuiltinAssertionError
""")

@@ -72,7 +72,7 @@ class FilesCompleter(object):
# the following barfs with a syntax error on py2.5
# @pytest.mark.skipif("sys.version_info < (2,6)")
class TestArgComplete:
@pytest.mark.skipif("sys.platform == 'win32'")
@pytest.mark.skipif("sys.platform in ('win32', 'darwin')")
@pytest.mark.skipif("sys.version_info < (2,6)")
def test_compare_with_compgen(self):
from _pytest._argcomplete import FastFilesCompleter
@@ -81,7 +81,7 @@ class TestArgComplete:
for x in '/ /d /data qqq'.split():
assert equal_with_bash(x, ffc, fc, out=py.std.sys.stdout)

@pytest.mark.skipif("sys.platform == 'win32'")
@pytest.mark.skipif("sys.platform in ('win32', 'darwin')")
@pytest.mark.skipif("sys.version_info < (2,6)")
def test_remove_dir_prefix(self):
"""this is not compatible with compgen but it is with bash itself:

@@ -1,8 +1,9 @@
# -*- coding: utf-8 -*-
import sys

import py, pytest
import _pytest.assertion as plugin
from _pytest.assertion import reinterpret, util
from _pytest.assertion import reinterpret
needsnewassert = pytest.mark.skipif("sys.version_info < (2,6)")


@@ -176,6 +177,15 @@ class TestAssert_reprcompare:
expl = ' '.join(callequal('foo', 'bar'))
assert 'raised in repr()' not in expl

def test_unicode(self):
left = py.builtin._totext('£€', 'utf-8')
right = py.builtin._totext('£', 'utf-8')
expl = callequal(left, right)
assert expl[0] == py.builtin._totext("'£€' == '£'", 'utf-8')
assert expl[1] == py.builtin._totext('- £€', 'utf-8')
assert expl[2] == py.builtin._totext('+ £', 'utf-8')


def test_python25_compile_issue257(testdir):
testdir.makepyfile("""
def test_rewritten():
@@ -353,7 +363,7 @@ def test_traceback_failure(testdir):

@pytest.mark.skipif("sys.version_info < (2,5) or '__pypy__' in sys.builtin_module_names or sys.platform.startswith('java')" )
def test_warn_missing(testdir):
p1 = testdir.makepyfile("")
testdir.makepyfile("")
result = testdir.run(sys.executable, "-OO", "-m", "pytest", "-h")
result.stderr.fnmatch_lines([
"*WARNING*assert statements are not executed*",
@@ -376,3 +386,16 @@ def test_recursion_source_decode(testdir):
result.stdout.fnmatch_lines("""
<Module*>
""")

def test_AssertionError_message(testdir):
testdir.makepyfile("""
def test_hello():
x,y = 1,2
assert 0, (x,y)
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines("""
*def test_hello*
*assert 0, (x,y)*
*AssertionError: (1, 2)*
""")

@@ -107,13 +107,13 @@ class TestAssertionRewrite:
assert f
assert getmsg(f) == "assert False"
def f():
assert a_global
assert a_global # noqa
assert getmsg(f, {"a_global" : False}) == "assert False"
def f():
assert sys == 42
assert getmsg(f, {"sys" : sys}) == "assert sys == 42"
def f():
assert cls == 42
assert cls == 42 # noqa
class X(object):
pass
assert getmsg(f, {"cls" : X}) == "assert cls == 42"
@@ -174,7 +174,7 @@ class TestAssertionRewrite:

def test_short_circut_evaluation(self):
def f():
assert True or explode
assert True or explode # noqa
getmsg(f, must_pass=True)
def f():
x = 1
@@ -206,7 +206,6 @@ class TestAssertionRewrite:
assert x + y
assert getmsg(f) == "assert (1 + -1)"
def f():
x = range(10)
assert not 5 % 4
assert getmsg(f) == "assert not (5 % 4)"

@@ -243,12 +242,12 @@ class TestAssertionRewrite:
g = 3
ns = {"x" : X}
def f():
assert not x.g
assert not x.g # noqa
assert getmsg(f, ns) == """assert not 3
+ where 3 = x.g"""
def f():
x.a = False
assert x.a
x.a = False # noqa
assert x.a # noqa
assert getmsg(f, ns) == """assert x.a"""

def test_comparisons(self):
@@ -435,21 +434,46 @@ class TestAssertionRewriteHookDetails(object):
def test_missing():
assert not __loader__.is_package('pytest_not_there')
""")
pkg = testdir.mkpydir('fun')
testdir.mkpydir('fun')
result = testdir.runpytest()
result.stdout.fnmatch_lines([
'* 3 passed*',
])


@pytest.mark.skipif("sys.version_info[0] >= 3")
@pytest.mark.xfail("hasattr(sys, 'pypy_translation_info')")
def test_assume_ascii(self, testdir):
content = "u'\xe2\x99\xa5'"
content = "u'\xe2\x99\xa5\x01\xfe'"
testdir.tmpdir.join("test_encoding.py").write(content, "wb")
res = testdir.runpytest()
assert res.ret != 0
assert "SyntaxError: Non-ASCII character" in res.stdout.str()

@pytest.mark.skipif("sys.version_info[0] >= 3")
def test_detect_coding_cookie(self, testdir):
testdir.tmpdir.join("test_cookie.py").write("""# -*- coding: utf-8 -*-
u"St\xc3\xa4d"
def test_rewritten():
assert "@py_builtins" in globals()""", "wb")
assert testdir.runpytest().ret == 0

@pytest.mark.skipif("sys.version_info[0] >= 3")
def test_detect_coding_cookie_second_line(self, testdir):
testdir.tmpdir.join("test_cookie.py").write("""#!/usr/bin/env python
# -*- coding: utf-8 -*-
u"St\xc3\xa4d"
def test_rewritten():
assert "@py_builtins" in globals()""", "wb")
assert testdir.runpytest().ret == 0

@pytest.mark.skipif("sys.version_info[0] >= 3")
def test_detect_coding_cookie_crlf(self, testdir):
testdir.tmpdir.join("test_cookie.py").write("""#!/usr/bin/env python
# -*- coding: utf-8 -*-
u"St\xc3\xa4d"
def test_rewritten():
assert "@py_builtins" in globals()""".replace("\n", "\r\n"), "wb")
assert testdir.runpytest().ret == 0

def test_write_pyc(self, testdir, tmpdir, monkeypatch):
from _pytest.assertion.rewrite import _write_pyc
@@ -469,3 +493,35 @@ class TestAssertionRewriteHookDetails(object):
raise e
monkeypatch.setattr(b, "open", open)
assert not _write_pyc(state, [1], source_path, pycpath)

def test_resources_provider_for_loader(self, testdir):
"""
Attempts to load resources from a package should succeed normally,
even when the AssertionRewriteHook is used to load the modules.

See #366 for details.
"""
pytest.importorskip("pkg_resources")

testdir.mkpydir('testpkg')
contents = {
'testpkg/test_pkg': """
import pkg_resources

import pytest
from _pytest.assertion.rewrite import AssertionRewritingHook

def test_load_resource():
assert isinstance(__loader__, AssertionRewritingHook)
res = pkg_resources.resource_string(__name__, 'resource.txt')
res = res.decode('ascii')
assert res == 'Load me please.'
""",
}
testdir.makepyfile(**contents)
testdir.maketxtfile(**{'testpkg/resource': "Load me please."})

result = testdir.runpytest()
result.stdout.fnmatch_lines([
'* 1 passed*',
])

@@ -32,7 +32,7 @@ class TestCaptureManager:
assert capman._getmethod(config, sub.join("test_hello.py")) == mode

@needsosdup
@pytest.mark.multi(method=['no', 'fd', 'sys'])
@pytest.mark.parametrize("method", ['no', 'fd', 'sys'])
def test_capturing_basic_api(self, method):
capouter = py.io.StdCaptureFD()
old = sys.stdout, sys.stderr, sys.stdin
@@ -81,7 +81,7 @@ class TestCaptureManager:
capouter.reset()

@pytest.mark.xfail("hasattr(sys, 'pypy_version_info')")
@pytest.mark.multi(method=['fd', 'sys'])
@pytest.mark.parametrize("method", ['fd', 'sys'])
def test_capturing_unicode(testdir, method):
if sys.version_info >= (3,0):
obj = "'b\u00f6y'"
@@ -100,7 +100,7 @@ def test_capturing_unicode(testdir, method):
"*1 passed*"
])

@pytest.mark.multi(method=['fd', 'sys'])
@pytest.mark.parametrize("method", ['fd', 'sys'])
def test_capturing_bytes_in_utf8_encoding(testdir, method):
testdir.makepyfile("""
def test_unicode():

@@ -242,7 +242,7 @@ class TestCustomConftests:
assert "passed" in result.stdout.str()

def test_pytest_fs_collect_hooks_are_seen(self, testdir):
conf = testdir.makeconftest("""
testdir.makeconftest("""
import pytest
class MyModule(pytest.Module):
pass
@@ -250,8 +250,8 @@ class TestCustomConftests:
if path.ext == ".py":
return MyModule(path, parent)
""")
sub = testdir.mkdir("sub")
p = testdir.makepyfile("def test_x(): pass")
testdir.mkdir("sub")
testdir.makepyfile("def test_x(): pass")
result = testdir.runpytest("--collect-only")
result.stdout.fnmatch_lines([
"*MyModule*",
@@ -318,7 +318,7 @@ class TestSession:
topdir = testdir.tmpdir
rcol = Session(config)
assert topdir == rcol.fspath
rootid = rcol.nodeid
#rootid = rcol.nodeid
#root2 = rcol.perform_collect([rcol.nodeid], genitems=False)[0]
#assert root2 == rcol, rootid
colitems = rcol.perform_collect([rcol.nodeid], genitems=False)
@@ -329,13 +329,13 @@ class TestSession:
def test_collect_protocol_single_function(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
id = "::".join([p.basename, "test_func"])
topdir = testdir.tmpdir
items, hookrec = testdir.inline_genitems(id)
item, = items
assert item.name == "test_func"
newid = item.nodeid
assert newid == id
py.std.pprint.pprint(hookrec.hookrecorder.calls)
topdir = testdir.tmpdir # noqa
hookrec.hookrecorder.contains([
("pytest_collectstart", "collector.fspath == topdir"),
("pytest_make_collect_report", "collector.fspath == topdir"),
@@ -436,7 +436,7 @@ class TestSession:
])

def test_serialization_byid(self, testdir):
p = testdir.makepyfile("def test_func(): pass")
testdir.makepyfile("def test_func(): pass")
items, hookrec = testdir.inline_genitems()
assert len(items) == 1
item, = items

@@ -16,7 +16,7 @@ class TestParseIni:
assert config.inicfg['name'] == 'value'

def test_getcfg_empty_path(self, tmpdir):
cfg = getcfg([''], ['setup.cfg']) #happens on py.test ""
getcfg([''], ['setup.cfg']) #happens on py.test ""

def test_append_parse_args(self, testdir, tmpdir):
tmpdir.join("setup.cfg").write(py.code.Source("""
@@ -31,7 +31,7 @@ class TestParseIni:
#assert len(args) == 1

def test_tox_ini_wrong_version(self, testdir):
p = testdir.makefile('.ini', tox="""
testdir.makefile('.ini', tox="""
[pytest]
minversion=9.0
""")
@@ -41,7 +41,7 @@ class TestParseIni:
"*tox.ini:2*requires*9.0*actual*"
])

@pytest.mark.multi(name="setup.cfg tox.ini pytest.ini".split())
@pytest.mark.parametrize("name", "setup.cfg tox.ini pytest.ini".split())
def test_ini_names(self, testdir, name):
testdir.tmpdir.join(name).write(py.std.textwrap.dedent("""
[pytest]
@@ -77,7 +77,7 @@ class TestParseIni:
class TestConfigCmdlineParsing:
def test_parsing_again_fails(self, testdir):
config = testdir.parseconfig()
pytest.raises(AssertionError, "config.parse([])")
pytest.raises(AssertionError, lambda: config.parse([]))


class TestConfigAPI:
@@ -200,7 +200,7 @@ class TestConfigAPI:
parser.addini("args", "new args", type="args")
parser.addini("a2", "", "args", default="1 2 3".split())
""")
p = testdir.makeini("""
testdir.makeini("""
[pytest]
args=123 "123 hello" "this"
""")
@@ -217,7 +217,7 @@ class TestConfigAPI:
parser.addini("xy", "", type="linelist")
parser.addini("a2", "", "linelist")
""")
p = testdir.makeini("""
testdir.makeini("""
[pytest]
xy= 123 345
second line
@@ -234,7 +234,7 @@ class TestConfigAPI:
def pytest_addoption(parser):
parser.addini("xy", "", type="linelist")
""")
p = testdir.makeini("""
testdir.makeini("""
[pytest]
xy= 123
""")

@@ -8,7 +8,7 @@ def pytest_generate_tests(metafunc):

def pytest_funcarg__basedir(request):
def basedirmaker(request):
basedir = d = request.getfuncargvalue("tmpdir")
d = request.getfuncargvalue("tmpdir")
d.ensure("adir/conftest.py").write("a=1 ; Directory = 3")
d.ensure("adir/b/conftest.py").write("b=2 ; a = 1.5")
if request.param == "inpackage":
@@ -41,7 +41,7 @@ class TestConftestValueAccessGlobal:

def test_immediate_initialiation_and_incremental_are_the_same(self, basedir):
conftest = Conftest()
snap0 = len(conftest._path2confmods)
len(conftest._path2confmods)
conftest.getconftestmodules(basedir)
snap1 = len(conftest._path2confmods)
#assert len(conftest._path2confmods) == snap1 + 1
@@ -57,7 +57,7 @@ class TestConftestValueAccessGlobal:

def test_value_access_not_existing(self, basedir):
conftest = ConftestWithSetinitial(basedir)
pytest.raises(KeyError, "conftest.rget('a')")
pytest.raises(KeyError, lambda: conftest.rget('a'))
#pytest.raises(KeyError, "conftest.lget('a')")

def test_value_access_by_path(self, basedir):
@@ -97,7 +97,7 @@ def test_conftest_in_nonpkg_with_init(tmpdir):
tmpdir.ensure("adir-1.0/b/conftest.py").write("b=2 ; a = 1.5")
tmpdir.ensure("adir-1.0/b/__init__.py")
tmpdir.ensure("adir-1.0/__init__.py")
conftest = ConftestWithSetinitial(tmpdir.join("adir-1.0", "b"))
ConftestWithSetinitial(tmpdir.join("adir-1.0", "b"))

def test_doubledash_not_considered(testdir):
conf = testdir.mkdir("--option")
@@ -182,7 +182,7 @@ def test_setinitial_confcut(testdir):
assert conftest.getconftestmodules(sub) == []
assert conftest.getconftestmodules(conf.dirpath()) == []

@pytest.mark.multi(name='test tests whatever .dotdir'.split())
@pytest.mark.parametrize("name", 'test tests whatever .dotdir'.split())
def test_setinitial_conftest_subdirs(testdir, name):
sub = testdir.mkdir(name)
subconftest = sub.ensure("conftest.py")
@@ -215,3 +215,40 @@ def test_conftest_import_order(testdir, monkeypatch):
conftest = Conftest()
monkeypatch.setattr(conftest, 'importconftest', impct)
assert conftest.getconftestmodules(sub) == [ct1, ct2]


def test_fixture_dependency(testdir, monkeypatch):
ct1 = testdir.makeconftest("")
ct1 = testdir.makepyfile("__init__.py")
ct1.write("")
sub = testdir.mkdir("sub")
sub.join("__init__.py").write("")
sub.join("conftest.py").write(py.std.textwrap.dedent("""
import pytest

@pytest.fixture
def not_needed():
assert False, "Should not be called!"

@pytest.fixture
def foo():
assert False, "Should not be called!"

@pytest.fixture
def bar(foo):
return 'bar'
"""))
subsub = sub.mkdir("subsub")
subsub.join("__init__.py").write("")
subsub.join("test_bar.py").write(py.std.textwrap.dedent("""
import pytest

@pytest.fixture
def bar():
return 'sub bar'

def test_event_fixture(bar):
assert bar == 'sub bar'
"""))
result = testdir.runpytest("sub")
result.stdout.fnmatch_lines(["*1 passed*"])

@@ -1,6 +1,5 @@
import pytest, py, os
from _pytest.core import PluginManager
from _pytest.core import MultiCall, HookRelay, varnames
from _pytest.core import * # noqa
from _pytest.config import get_plugin_manager


@@ -8,13 +7,12 @@ class TestBootstrapping:
def test_consider_env_fails_to_import(self, monkeypatch):
pluginmanager = PluginManager()
monkeypatch.setenv('PYTEST_PLUGINS', 'nonexisting', prepend=",")
pytest.raises(ImportError, "pluginmanager.consider_env()")
pytest.raises(ImportError, lambda: pluginmanager.consider_env())

def test_preparse_args(self):
pluginmanager = PluginManager()
pytest.raises(ImportError, """
pluginmanager.consider_preparse(["xyz", "-p", "hello123"])
""")
pytest.raises(ImportError, lambda:
pluginmanager.consider_preparse(["xyz", "-p", "hello123"]))

def test_plugin_prevent_register(self):
pluginmanager = PluginManager()
@@ -93,7 +91,7 @@ class TestBootstrapping:
# ok, we did not explode

def test_pluginmanager_ENV_startup(self, testdir, monkeypatch):
x500 = testdir.makepyfile(pytest_x500="#")
testdir.makepyfile(pytest_x500="#")
p = testdir.makepyfile("""
import pytest
def test_hello(pytestconfig):
@@ -110,7 +108,7 @@ class TestBootstrapping:
pytest.raises(ImportError, 'pluginmanager.import_plugin("qweqwex.y")')
pytest.raises(ImportError, 'pluginmanager.import_plugin("pytest_qweqwx.y")')

reset = testdir.syspathinsert()
testdir.syspathinsert()
pluginname = "pytest_hello"
testdir.makepyfile(**{pluginname: ""})
pluginmanager.import_plugin("pytest_hello")
@@ -128,7 +126,7 @@ class TestBootstrapping:
pytest.raises(ImportError, 'pluginmanager.import_plugin("qweqwex.y")')
pytest.raises(ImportError, 'pluginmanager.import_plugin("pytest_qweqwex.y")')

reset = testdir.syspathinsert()
testdir.syspathinsert()
testdir.mkpydir("pkg").join("plug.py").write("x=3")
pluginname = "pkg.plug"
pluginmanager.import_plugin(pluginname)
@@ -170,7 +168,7 @@ class TestBootstrapping:
def test_consider_conftest_deps(self, testdir):
mod = testdir.makepyfile("pytest_plugins='xyz'").pyimport()
pp = PluginManager()
pytest.raises(ImportError, "pp.consider_conftest(mod)")
pytest.raises(ImportError, lambda: pp.consider_conftest(mod))

def test_pm(self):
pp = PluginManager()
@@ -210,9 +208,7 @@ class TestBootstrapping:
l = pp.getplugins()
assert mod in l
pytest.raises(ValueError, "pp.register(mod)")
mod2 = py.std.types.ModuleType("pytest_hello")
#pp.register(mod2) # double pm
pytest.raises(ValueError, "pp.register(mod)")
pytest.raises(ValueError, lambda: pp.register(mod))
#assert not pp.isregistered(mod2)
assert pp.getplugins() == l

@@ -229,14 +225,14 @@ class TestBootstrapping:
class hello:
def pytest_gurgel(self):
pass
pytest.raises(Exception, "pp.register(hello())")
pytest.raises(Exception, lambda: pp.register(hello()))

def test_register_mismatch_arg(self):
pp = get_plugin_manager()
class hello:
def pytest_configure(self, asd):
pass
excinfo = pytest.raises(Exception, "pp.register(hello())")
pytest.raises(Exception, lambda: pp.register(hello()))

def test_register(self):
pm = get_plugin_manager()
@@ -293,7 +289,7 @@ class TestBootstrapping:
class TestPytestPluginInteractions:

def test_addhooks_conftestplugin(self, testdir):
newhooks = testdir.makepyfile(newhooks="""
testdir.makepyfile(newhooks="""
def pytest_myhook(xyz):
"new hook"
""")
@@ -312,7 +308,7 @@ class TestPytestPluginInteractions:
assert res == [11]

def test_addhooks_nohooks(self, testdir):
conf = testdir.makeconftest("""
testdir.makeconftest("""
import sys
def pytest_addhooks(pluginmanager):
pluginmanager.addhooks(sys)
@@ -344,10 +340,8 @@ class TestPytestPluginInteractions:
assert hello == "world"
assert 'hello' in py.test.__all__
""")
result = testdir.runpytest(p)
result.stdout.fnmatch_lines([
"*1 passed*"
])
reprec = testdir.inline_run(p)
reprec.assertoutcome(passed=1)

def test_do_option_postinitialize(self, testdir):
config = testdir.parseconfigure()
@@ -431,7 +425,7 @@ def test_namespace_has_default_and_env_plugins(testdir):

def test_varnames():
def f(x):
i = 3
i = 3 # noqa
class A:
def f(self, y):
pass
@@ -442,6 +436,15 @@ def test_varnames():
assert varnames(A().f) == ('y',)
assert varnames(B()) == ('z',)

def test_varnames_class():
class C:
def __init__(self, x):
pass
class D:
pass
assert varnames(C) == ("x",)
assert varnames(D) == ()

class TestMultiCall:
def test_uses_copy_of_methods(self):
l = [lambda: 42]
@@ -496,7 +499,7 @@ class TestMultiCall:

def test_tags_call_error(self):
multicall = MultiCall([lambda x: x], {})
pytest.raises(TypeError, "multicall.execute()")
pytest.raises(TypeError, multicall.execute)

def test_call_subexecute(self):
def m(__multicall__):
@@ -544,7 +547,7 @@ class TestHookRelay:
def hello(self, arg):
"api hook 1"
mcm = HookRelay(hookspecs=Api, pm=pm, prefix="he")
pytest.raises(TypeError, "mcm.hello(3)")
pytest.raises(TypeError, lambda: mcm.hello(3))

def test_firstresult_definition(self):
pm = PluginManager()
@@ -655,3 +658,10 @@ def test_default_markers(testdir):
"*tryfirst*first*",
"*trylast*last*",
])

def test_importplugin_issue375(testdir):
testdir.makepyfile(qwe="import aaaa")
excinfo = pytest.raises(ImportError, lambda: importplugin("qwe"))
assert "qwe" not in str(excinfo.value)
assert "aaaa" in str(excinfo.value)


@@ -99,7 +99,7 @@ class TestDoctests:
reprec.assertoutcome(failed=1)

def test_doctest_unexpected_exception(self, testdir):
p = testdir.maketxtfile("""
testdir.maketxtfile("""
>>> i = 0
>>> 0 / i
2
@@ -136,7 +136,7 @@ class TestDoctests:
testdir.tmpdir.join("hello.py").write(py.code.Source("""
import asdalsdkjaslkdjasd
"""))
p = testdir.maketxtfile("""
testdir.maketxtfile("""
>>> import hello
>>>
""")
@@ -209,6 +209,26 @@ class TestDoctests:
reprec = testdir.inline_run(p, )
reprec.assertoutcome(passed=1)

@xfail_if_pdbpp_installed
def test_txtfile_with_usefixtures_in_ini(self, testdir):
testdir.makeini("""
[pytest]
usefixtures = myfixture
""")
testdir.makeconftest("""
import pytest
@pytest.fixture
def myfixture(monkeypatch):
monkeypatch.setenv("HELLO", "WORLD")
""")

p = testdir.maketxtfile("""
>>> import os
>>> os.environ["HELLO"]
'WORLD'
""")
reprec = testdir.inline_run(p, )
reprec.assertoutcome(passed=1)

@xfail_if_pdbpp_installed
def test_doctestmodule_with_fixtures(self, testdir):

@@ -1,31 +0,0 @@
"""Tests for fixtures with different scoping."""
import py.code


def test_fixture_finalizer(testdir):
testdir.makeconftest("""
import pytest
import sys

@pytest.fixture
def browser(request):

def finalize():
sys.stdout.write('Finalized')
request.addfinalizer(finalize)
return {}
""")
b = testdir.mkdir("subdir")
b.join("test_overriden_fixture_finalizer.py").write(py.code.Source("""
import pytest
@pytest.fixture
def browser(browser):
browser['visited'] = True
return browser

def test_browser(browser):
assert browser['visited'] is True
"""))
reprec = testdir.runpytest("-s")
for test in ['test_browser']:
reprec.stdout.fnmatch_lines('*Finalized*')
@@ -1,28 +0,0 @@
"""Tests for fixtures with different scoping."""


def test_class_scope_with_normal_tests(testdir):
testpath = testdir.makepyfile("""
import pytest

class Box:
value = 0

@pytest.fixture(scope='class')
def a(request):
Box.value += 1
return Box.value

def test_a(a):
assert a == 1

class Test1:
def test_b(self, a):
assert a == 2

class Test2:
def test_c(self, a):
assert a == 3""")
reprec = testdir.inline_run(testpath)
for test in ['test_a', 'test_b', 'test_c']:
assert reprec.matchreport(test).passed
@@ -1,6 +1,5 @@
import pytest
import py, os, sys
import subprocess
import sys


@pytest.fixture(scope="module")
@@ -36,14 +35,3 @@ def test_gen(testdir, anypython, standalone):
result = standalone.run(anypython, testdir, p)
assert result.ret != 0

def test_rundist(testdir, pytestconfig, standalone):
pytestconfig.pluginmanager.skipifmissing("xdist")
testdir.makepyfile("""
def test_one():
pass
""")
result = standalone.run(sys.executable, testdir, '-n', '3')
assert result.ret == 0
result.stdout.fnmatch_lines([
"*1 passed*",
])

@@ -1,4 +1,4 @@
import py, pytest,os
import py, pytest
from _pytest.helpconfig import collectattr

def test_version(testdir, pytestconfig):

@@ -1,6 +1,8 @@
import pytest
# -*- coding: utf-8 -*-

from xml.dom import minidom
import py, sys, os
from _pytest.junitxml import LogXML

def runandparse(testdir, *args):
resultpath = testdir.tmpdir.join("junit.xml")
@@ -370,7 +372,7 @@ def test_nullbyte(testdir):
assert False
""")
xmlf = testdir.tmpdir.join('junit.xml')
result = testdir.runpytest('--junitxml=%s' % xmlf)
testdir.runpytest('--junitxml=%s' % xmlf)
text = xmlf.read()
assert '\x00' not in text
assert '#x00' in text
@@ -386,7 +388,7 @@ def test_nullbyte_replace(testdir):
assert False
""")
xmlf = testdir.tmpdir.join('junit.xml')
result = testdir.runpytest('--junitxml=%s' % xmlf)
testdir.runpytest('--junitxml=%s' % xmlf)
text = xmlf.read()
assert '#x0' in text

@@ -405,7 +407,6 @@ def test_invalid_xml_escape():
unichr(65)
except NameError:
unichr = chr
u = py.builtin._totext
invalid = (0x00, 0x1, 0xB, 0xC, 0xE, 0x19,
27, # issue #126
0xD800, 0xDFFF, 0xFFFE, 0x0FFFF) #, 0x110000)
@@ -425,8 +426,6 @@ def test_invalid_xml_escape():
assert chr(i) == bin_xml_escape(unichr(i)).uniobj

def test_logxml_path_expansion(tmpdir, monkeypatch):
from _pytest.junitxml import LogXML

home_tilde = py.path.local(os.path.expanduser('~')).join('test.xml')

xml_tilde = LogXML('~%stest.xml' % tmpdir.sep, None)
@@ -463,3 +462,26 @@ def test_escaped_parametrized_names_xml(testdir):
assert_attr(node,
name="test_func[#x00]")

def test_unicode_issue368(testdir):
path = testdir.tmpdir.join("test.xml")
log = LogXML(str(path), None)
ustr = py.builtin._totext("ВНИ!", "utf-8")
class report:
longrepr = ustr
sections = []
nodeid = "something"

# hopefully this is not too brittle ...
log.pytest_sessionstart()
log._opentestcase(report)
log.append_failure(report)
log.append_collect_failure(report)
log.append_collect_skipped(report)
log.append_error(report)
report.longrepr = "filename", 1, ustr
log.append_skipped(report)
report.wasxfail = ustr
log.append_skipped(report)
log.pytest_sessionfinish()

@@ -13,7 +13,7 @@ class TestMark:

def test_pytest_mark_notcallable(self):
mark = Mark()
pytest.raises((AttributeError, TypeError), "mark()")
pytest.raises((AttributeError, TypeError), mark)

def test_pytest_mark_bare(self):
mark = Mark()
@@ -35,7 +35,7 @@ class TestMark:
mark = Mark()
def f():
pass
marker = mark.world
mark.world
mark.world(x=3)(f)
assert f.world.kwargs['x'] == 3
mark.world(y=4)(f)
@@ -100,6 +100,16 @@ def test_markers_option(testdir):
"*a1some*another marker",
])

def test_mark_on_pseudo_function(testdir):
testdir.makepyfile("""
import pytest

@pytest.mark.r(lambda x: 0/0)
def test_hello():
pass
""")
reprec = testdir.inline_run()
reprec.assertoutcome(passed=1)

def test_strict_prohibits_unregistered_markers(testdir):
testdir.makepyfile("""
@@ -114,7 +124,7 @@ def test_strict_prohibits_unregistered_markers(testdir):
"*unregisteredmark*not*registered*",
])

@pytest.mark.multi(spec=[
@pytest.mark.parametrize("spec", [
("xyz", ("test_one",)),
("xyz and xyz2", ()),
("xyz2", ("test_two",)),
@@ -137,7 +147,7 @@ def test_mark_option(spec, testdir):
assert len(passed) == len(passed_result)
assert list(passed) == list(passed_result)

@pytest.mark.multi(spec=[
@pytest.mark.parametrize("spec", [
("interface", ("test_interface",)),
("not interface", ("test_nointer",)),
])
@@ -162,9 +172,11 @@ def test_mark_option_custom(spec, testdir):
assert len(passed) == len(passed_result)
assert list(passed) == list(passed_result)

@pytest.mark.multi(spec=[
@pytest.mark.parametrize("spec", [
("interface", ("test_interface",)),
("not interface", ("test_nointer",)),
("not interface", ("test_nointer", "test_pass")),
("pass", ("test_pass",)),
("not pass", ("test_interface", "test_nointer")),
])
def test_keyword_option_custom(spec, testdir):
testdir.makepyfile("""
@@ -172,6 +184,8 @@ def test_keyword_option_custom(spec, testdir):
pass
def test_nointer():
pass
def test_pass():
pass
""")
opt, passed_result = spec
rec = testdir.inline_run("-k", opt)
@@ -181,6 +195,24 @@ def test_keyword_option_custom(spec, testdir):
assert list(passed) == list(passed_result)


@pytest.mark.parametrize("spec", [
("None", ("test_func[None]",)),
("1.3", ("test_func[1.3]",))
])
def test_keyword_option_parametrize(spec, testdir):
testdir.makepyfile("""
import pytest
@pytest.mark.parametrize("arg", [None, 1.3])
def test_func(arg):
pass
""")
opt, passed_result = spec
rec = testdir.inline_run("-k", opt)
passed, skipped, fail = rec.listoutcomes()
passed = [x.nodeid.split("::")[-1] for x in passed]
assert len(passed) == len(passed_result)
assert list(passed) == list(passed_result)

class TestFunctional:

def test_mark_per_function(self, testdir):
@@ -364,7 +396,7 @@ class TestFunctional:
assert len(deselected_tests) == 2

def test_keywords_at_node_level(self, testdir):
p = testdir.makepyfile("""
testdir.makepyfile("""
import pytest
@pytest.fixture(scope="session", autouse=True)
def some(request):
@@ -510,3 +542,4 @@ class TestKeywordSelection:

assert_test_is_not_selected("__")
assert_test_is_not_selected("()")


@@ -45,6 +45,12 @@ class TestSetattrWithImportPath:
import _pytest
assert _pytest.config.Config == 42

def test_unicode_string(self, monkeypatch):
monkeypatch.setattr("_pytest.config.Config", 42)
import _pytest
assert _pytest.config.Config == 42
monkeypatch.delattr("_pytest.config.Config")

def test_wrong_target(self, monkeypatch):
pytest.raises(TypeError, lambda: monkeypatch.setattr(None, None))


@@ -96,7 +96,7 @@ def test_nose_setup_func_failure(testdir):


def test_nose_setup_func_failure_2(testdir):
p = testdir.makepyfile("""
testdir.makepyfile("""
l = []

my_setup = 1

testing/test_parseopt.py

@@ -3,7 +3,6 @@ import sys
 import os
 import py, pytest
 from _pytest import config as parseopt
-from textwrap import dedent

 @pytest.fixture
 def parser():
@@ -12,7 +11,7 @@ def parser():
 class TestParser:
     def test_no_help_by_default(self, capsys):
         parser = parseopt.Parser(usage="xyz")
-        pytest.raises(SystemExit, 'parser.parse(["-h"])')
+        pytest.raises(SystemExit, lambda: parser.parse(["-h"]))
         out, err = capsys.readouterr()
         assert err.find("error: unrecognized arguments") != -1
@@ -65,9 +64,9 @@ class TestParser:
         assert group2 is group

     def test_group_ordering(self, parser):
-        group0 = parser.getgroup("1")
-        group1 = parser.getgroup("2")
-        group1 = parser.getgroup("3", after="1")
+        parser.getgroup("1")
+        parser.getgroup("2")
+        parser.getgroup("3", after="1")
         groups = parser._groups
         groups_names = [x.name for x in groups]
         assert groups_names == list("132")
@@ -104,7 +103,7 @@ class TestParser:
         assert getattr(args, parseopt.FILE_OR_DIR)[0] == py.path.local()

     def test_parse_known_args(self, parser):
-        args = parser.parse_known_args([py.path.local()])
+        parser.parse_known_args([py.path.local()])
         parser.addoption("--hello", action="store_true")
         ns = parser.parse_known_args(["x", "--y", "--hello", "this"])
         assert ns.hello
@@ -114,7 +113,7 @@ class TestParser:
         option = parser.parse([])
         assert option.hello == "x"
         del option.hello
-        args = parser.parse_setoption([], option)
+        parser.parse_setoption([], option)
         assert option.hello == "x"

     def test_parse_setoption(self, parser):
@@ -128,7 +127,7 @@ class TestParser:
         assert not args

     def test_parse_special_destination(self, parser):
-        x = parser.addoption("--ultimate-answer", type=int)
+        parser.addoption("--ultimate-answer", type=int)
         args = parser.parse(['--ultimate-answer', '42'])
         assert args.ultimate_answer == 42
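The parseopt hunks follow the cleanup pattern that runs through this whole changeset: unused imports and assignments are dropped so the new flakes tox environment passes, and string arguments to pytest.raises are replaced by callables. A sketch of the equivalent pytest.raises spellings (boom is a stand-in function, not from the diff):

    import pytest

    def boom():
        raise SystemExit(2)

    def test_raises_forms():
        pytest.raises(SystemExit, boom)            # callable form
        pytest.raises(SystemExit, lambda: boom())  # lambda, as in the hunks
        with pytest.raises(SystemExit):            # context-manager form
            boom()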
testing/test_pastebin.py

@@ -1,4 +1,3 @@
 import pytest

 class TestPasting:
     def pytest_funcarg__pastebinlist(self, request):
@@ -56,4 +55,4 @@ class TestRPCClient:
         assert proxy is not None
         assert proxy.__class__.__module__.startswith('xmlrpc')
testing/test_pdb.py

@@ -1,4 +1,5 @@
-import py, pytest
+import py
+import sys

 from test_doctest import xfail_if_pdbpp_installed
@@ -62,7 +63,7 @@ class TestPDB:
         child.expect(".*i = 0")
         child.expect("(Pdb)")
         child.sendeof()
-        rest = child.read()
+        rest = child.read().decode("utf8")
         assert "1 failed" in rest
         assert "def test_1" not in rest
         if child.isalive():
@@ -127,7 +128,7 @@ class TestPDB:
         child.expect("x = 3")
         child.expect("(Pdb)")
         child.sendeof()
-        rest = child.read()
+        rest = child.read().decode("utf-8")
         assert "1 failed" in rest
         assert "def test_1" in rest
         assert "hello17" in rest  # out is captured
@@ -144,7 +145,7 @@ class TestPDB:
         child.expect("test_1")
         child.expect("(Pdb)")
         child.sendeof()
-        rest = child.read()
+        rest = child.read().decode("utf8")
         assert "1 failed" in rest
         assert "reading from stdin while output" not in rest
         if child.isalive():
@@ -162,7 +163,7 @@ class TestPDB:
         child.send("capsys.readouterr()\n")
         child.expect("hello1")
         child.sendeof()
-        rest = child.read()
+        child.read()
         if child.isalive():
             child.wait()

@@ -182,7 +183,7 @@ class TestPDB:
         child.expect("0")
         child.expect("(Pdb)")
         child.sendeof()
-        rest = child.read()
+        rest = child.read().decode("utf8")
         assert "1 failed" in rest
         if child.isalive():
             child.wait()
@@ -206,7 +207,7 @@ class TestPDB:
         child.sendline('c')
         child.expect("x = 4")
         child.sendeof()
-        rest = child.read()
+        rest = child.read().decode("utf8")
         assert "1 failed" in rest
         assert "def test_1" in rest
         assert "hello17" in rest  # out is captured
@@ -238,6 +239,7 @@ class TestPDB:
         child.expect("x = 5")
         child.sendeof()
         child.wait()
+
     def test_pdb_collection_failure_is_shown(self, testdir):
         p1 = testdir.makepyfile("""xxx """)
         result = testdir.runpytest("--pdb", p1)
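The repeated change in the TestPDB hunks is decoding pexpect output: under Python 3, child.read() returns bytes, so matching against str patterns like "1 failed" needs an explicit decode. A hedged sketch of that pattern (assumes pexpect is installed, as in the pexpect tox environments further down):

    import sys
    import pexpect

    # Spawn a child process and read everything it prints until EOF.
    child = pexpect.spawn(sys.executable, ["-c", "print('1 failed')"])
    rest = child.read()
    if not isinstance(rest, str):   # bytes under Python 3
        rest = rest.decode("utf8")
    assert "1 failed" in rest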
testing/test_pytester.py

@@ -1,7 +1,7 @@
 import py
 import pytest
-import os, sys
-from _pytest.pytester import LineMatcher, LineComp, HookRecorder
+import os
+from _pytest.pytester import HookRecorder
 from _pytest.core import PluginManager

 def test_reportrecorder(testdir):
@@ -56,7 +56,6 @@ def test_reportrecorder(testdir):


 def test_parseconfig(testdir):
-    import py
     config1 = testdir.parseconfig()
     config2 = testdir.parseconfig()
     assert config2 != config1
testing/test_resultlog.py

@@ -195,3 +195,23 @@ def test_no_resultlog_on_slaves(testdir):
     pytest_unconfigure(config)
     assert not hasattr(config, '_resultlog')
+
+
+def test_failure_issue380(testdir):
+    testdir.makeconftest("""
+        import pytest
+        class MyCollector(pytest.File):
+            def collect(self):
+                raise ValueError()
+            def repr_failure(self, excinfo):
+                return "somestring"
+        def pytest_collect_file(path, parent):
+            return MyCollector(parent=parent, fspath=path)
+    """)
+    testdir.makepyfile("""
+        def test_func():
+            pass
+    """)
+    result = testdir.runpytest("--resultlog=log")
+    assert result.ret == 1
testing/test_runner.py

@@ -1,6 +1,7 @@
+from __future__ import with_statement
+
 import pytest, py, sys, os
 from _pytest import runner, main
-from py._code.code import ReprExceptionInfo

 class TestSetupState:
     def test_setup(self, testdir):
@@ -39,10 +40,39 @@ class TestSetupState:
            def setup_module(mod):
                raise ValueError(42)
            def test_func(): pass
-        """)
+        """) # noqa
         ss = runner.SetupState()
-        pytest.raises(ValueError, "ss.prepare(item)")
-        pytest.raises(ValueError, "ss.prepare(item)")
+        pytest.raises(ValueError, lambda: ss.prepare(item))
+        pytest.raises(ValueError, lambda: ss.prepare(item))
+
+    def test_teardown_multiple_one_fails(self, testdir):
+        r = []
+        def fin1(): r.append('fin1')
+        def fin2(): raise Exception('oops')
+        def fin3(): r.append('fin3')
+        item = testdir.getitem("def test_func(): pass")
+        ss = runner.SetupState()
+        ss.addfinalizer(fin1, item)
+        ss.addfinalizer(fin2, item)
+        ss.addfinalizer(fin3, item)
+        with pytest.raises(Exception) as err:
+            ss._callfinalizers(item)
+        assert err.value.args == ('oops',)
+        assert r == ['fin3', 'fin1']
+
+    def test_teardown_multiple_fail(self, testdir):
+        # Ensure the first exception is the one which is re-raised.
+        # Ideally both would be reported however.
+        def fin1(): raise Exception('oops1')
+        def fin2(): raise Exception('oops2')
+        item = testdir.getitem("def test_func(): pass")
+        ss = runner.SetupState()
+        ss.addfinalizer(fin1, item)
+        ss.addfinalizer(fin2, item)
+        with pytest.raises(Exception) as err:
+            ss._callfinalizers(item)
+        assert err.value.args == ('oops2',)
+

 class BaseFunctionalTests:
     def test_passfunction(self, testdir):
@@ -428,7 +458,7 @@ def test_importorskip():
     def f():
         importorskip("asdlkj")
     try:
-        sys = importorskip("sys")
+        sys = importorskip("sys")  # noqa
         assert sys == py.std.sys
         #path = py.test.importorskip("os.path")
         #assert path == py.std.os.path
@@ -439,12 +469,14 @@ def test_importorskip():
         assert path.purebasename == "test_runner"
         pytest.raises(SyntaxError, "py.test.importorskip('x y z')")
         pytest.raises(SyntaxError, "py.test.importorskip('x=y')")
-        path = importorskip("py", minversion=".".join(py.__version__))
         mod = py.std.types.ModuleType("hello123")
         mod.__version__ = "1.3"
+        sys.modules["hello123"] = mod
         pytest.raises(pytest.skip.Exception, """
-            py.test.importorskip("hello123", minversion="5.0")
+            py.test.importorskip("hello123", minversion="1.3.1")
         """)
+        mod2 = pytest.importorskip("hello123", minversion="1.3")
+        assert mod2 == mod
     except pytest.skip.Exception:
         print(py.code.ExceptionInfo())
         py.test.fail("spurious skip")
@@ -464,7 +496,7 @@ def test_pytest_cmdline_main(testdir):
    """)
     import subprocess
     popen = subprocess.Popen([sys.executable, str(p)], stdout=subprocess.PIPE)
-    s = popen.stdout.read()
+    popen.communicate()
     ret = popen.wait()
     assert ret == 0
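The new SetupState tests pin down the finalizer contract that 2.5.0's simplified implementation must honor: finalizers run in LIFO order, all of them run even when one raises, and the first exception raised is the one that propagates. A plain-Python sketch of that contract, independent of pytest internals:

    calls = []
    def fin1(): calls.append("fin1")
    def fin2(): raise RuntimeError("oops")
    def fin3(): calls.append("fin3")

    finalizers = [fin1, fin2, fin3]
    first_exc = None
    while finalizers:
        fin = finalizers.pop()        # LIFO: fin3, then fin2, then fin1
        try:
            fin()
        except Exception as e:
            if first_exc is None:     # keep only the first failure;
                first_exc = e         # a real implementation re-raises it at the end
    assert calls == ["fin3", "fin1"]
    assert str(first_exc) == "oops"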
testing/test_session.py

@@ -204,7 +204,7 @@ class TestNewSession(SessionTests):

 def test_plugin_specify(testdir):
     testdir.chdir()
-    config = pytest.raises(ImportError, """
+    pytest.raises(ImportError, """
        testdir.parseconfig("-p", "nqweotexistent")
    """)
     #pytest.raises(ImportError,
testing/test_skipping.py

@@ -1,8 +1,7 @@
 import pytest
 import sys

-from _pytest.skipping import MarkEvaluator, folded_skips
-from _pytest.skipping import pytest_runtest_setup
+from _pytest.skipping import MarkEvaluator, folded_skips, pytest_runtest_setup
 from _pytest.runner import runtestprotocol

 class TestEvaluator:
@@ -108,7 +107,7 @@ class TestEvaluator:
                pass
        """)
         ev = MarkEvaluator(item, 'skipif')
-        exc = pytest.raises(pytest.fail.Exception, "ev.istrue()")
+        exc = pytest.raises(pytest.fail.Exception, ev.istrue)
         assert """Failed: you need to specify reason=STRING when using booleans as conditions.""" in exc.value.msg

     def test_skipif_class(self, testdir):
@@ -159,13 +158,14 @@ class TestXFail:
            @pytest.mark.xfail
            def test_func():
                assert 0
+            def test_func2():
+                pytest.xfail("hello")
        """)
         result = testdir.runpytest("--runxfail")
         assert result.ret == 1
         result.stdout.fnmatch_lines([
             "*def test_func():*",
             "*assert 0*",
-            "*1 failed*",
+            "*1 failed*1 pass*",
         ])

     def test_xfail_evalfalse_but_fails(self, testdir):
@@ -188,7 +188,7 @@ class TestXFail:
            def test_this():
                assert 0
        """)
-        result = testdir.runpytest(p, '-v')
+        testdir.runpytest(p, '-v')
         #result.stdout.fnmatch_lines([
         #    "*HINT*use*-r*"
         #])
@@ -261,10 +261,7 @@ class TestXFail:
             "*reason:*hello*",
         ])
         result = testdir.runpytest(p, "--runxfail")
-        result.stdout.fnmatch_lines([
-            "*def test_this():*",
-            "*pytest.xfail*",
-        ])
+        result.stdout.fnmatch_lines("*1 pass*")

     def test_xfail_imperative_in_setup_function(self, testdir):
         p = testdir.makepyfile("""
@@ -285,10 +282,10 @@ class TestXFail:
             "*reason:*hello*",
         ])
         result = testdir.runpytest(p, "--runxfail")
-        result.stdout.fnmatch_lines([
-            "*def setup_function(function):*",
-            "*pytest.xfail*",
-        ])
+        result.stdout.fnmatch_lines("""
+            *def test_this*
+            *1 fail*
+        """)

     def xtest_dynamic_xfail_set_during_setup(self, testdir):
         p = testdir.makepyfile("""
@@ -372,8 +369,9 @@ class TestSkipif:
            @pytest.mark.skipif("hasattr(os, 'sep')")
            def test_func():
                pass
-        """)
-        x = pytest.raises(pytest.skip.Exception, "pytest_runtest_setup(item)")
+        """) # noqa
+        x = pytest.raises(pytest.skip.Exception, lambda:
+            pytest_runtest_setup(item))
         assert x.value.msg == "condition: hasattr(os, 'sep')"
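The MarkEvaluator hunk above enforces the rule its error message states: a string condition is evaluated by pytest, while a boolean condition must carry an explicit reason. A short sketch of both skipif spellings:

    import sys
    import pytest

    @pytest.mark.skipif("sys.platform == 'win32'")   # string: evaluated, self-documenting
    def test_string_condition():
        pass

    @pytest.mark.skipif(sys.platform == "win32",
                        reason="needs a POSIX platform")  # boolean: reason= is mandatory
    def test_bool_condition():
        pass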
testing/test_terminal.py

@@ -39,7 +39,7 @@ def pytest_generate_tests(metafunc):

 class TestTerminal:
     def test_pass_skip_fail(self, testdir, option):
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            import pytest
            def test_ok():
                pass
@@ -76,7 +76,6 @@ class TestTerminal:

     def test_writeline(self, testdir, linecomp):
         modcol = testdir.getmodulecol("def test_one(): pass")
-        stringio = py.io.TextIO()
         rep = TerminalReporter(modcol.config, file=linecomp.stringio)
         rep.write_fspath_result(py.path.local("xy.py"), '.')
         rep.write_line("hello world")
@@ -97,7 +96,7 @@ class TestTerminal:
         ])

     def test_runtest_location_shown_before_test_starts(self, testdir):
-        p1 = testdir.makepyfile("""
+        testdir.makepyfile("""
            def test_1():
                import time
                time.sleep(20)
@@ -108,7 +107,7 @@ class TestTerminal:
         child.kill(15)

     def test_itemreport_subclasses_show_subclassed_file(self, testdir):
-        p1 = testdir.makepyfile(test_p1="""
+        testdir.makepyfile(test_p1="""
            class BaseTests:
                def test_p1(self):
                    pass
@@ -145,7 +144,7 @@ class TestTerminal:
         assert " <- " not in result.stdout.str()

     def test_keyboard_interrupt(self, testdir, option):
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            def test_foobar():
                assert 0
            def test_spamegg():
@@ -172,7 +171,7 @@ class TestTerminal:
            def pytest_sessionstart():
                raise KeyboardInterrupt
        """)
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            def test_foobar():
                pass
        """)
@@ -214,7 +213,7 @@ class TestCollectonly:
         ])

     def test_collectonly_fatal(self, testdir):
-        p1 = testdir.makeconftest("""
+        testdir.makeconftest("""
            def pytest_collectstart(collector):
                assert 0, "urgs"
        """)
@@ -233,7 +232,6 @@ class TestCollectonly:
                pass
        """)
         result = testdir.runpytest("--collect-only", p)
-        stderr = result.stderr.str().strip()
         #assert stderr.startswith("inserting into sys.path")
         assert result.ret == 0
         result.stdout.fnmatch_lines([
@@ -247,7 +245,6 @@ class TestCollectonly:
     def test_collectonly_error(self, testdir):
         p = testdir.makepyfile("import Errlkjqweqwe")
         result = testdir.runpytest("--collect-only", p)
-        stderr = result.stderr.str().strip()
         assert result.ret == 1
         result.stdout.fnmatch_lines(py.code.Source("""
             *ERROR*
@@ -293,7 +290,7 @@ def test_repr_python_version(monkeypatch):

 class TestFixtureReporting:
     def test_setup_fixture_error(self, testdir):
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            def setup_function(function):
                print ("setup func")
                assert 0
@@ -311,7 +308,7 @@ class TestFixtureReporting:
         assert result.ret != 0

     def test_teardown_fixture_error(self, testdir):
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            def test_nada():
                pass
            def teardown_function(function):
@@ -329,7 +326,7 @@ class TestFixtureReporting:
         ])

     def test_teardown_fixture_error_and_test_failure(self, testdir):
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            def test_fail():
                assert 0, "failingfunc"

@@ -403,7 +400,7 @@ class TestTerminalFunctional:
         assert result.ret == 0

     def test_header_trailer_info(self, testdir):
-        p1 = testdir.makepyfile("""
+        testdir.makepyfile("""
            def test_passes():
                pass
        """)
@@ -486,20 +483,32 @@ class TestTerminalFunctional:


 def test_fail_extra_reporting(testdir):
-    p = testdir.makepyfile("def test_this(): assert 0")
-    result = testdir.runpytest(p)
+    testdir.makepyfile("def test_this(): assert 0")
+    result = testdir.runpytest()
     assert 'short test summary' not in result.stdout.str()
-    result = testdir.runpytest(p, '-rf')
+    result = testdir.runpytest('-rf')
     result.stdout.fnmatch_lines([
         "*test summary*",
         "FAIL*test_fail_extra_reporting*",
     ])

 def test_fail_reporting_on_pass(testdir):
-    p = testdir.makepyfile("def test_this(): assert 1")
-    result = testdir.runpytest(p, '-rf')
+    testdir.makepyfile("def test_this(): assert 1")
+    result = testdir.runpytest('-rf')
     assert 'short test summary' not in result.stdout.str()

+def test_color_yes(testdir):
+    testdir.makepyfile("def test_this(): assert 1")
+    result = testdir.runpytest('--color=yes')
+    assert 'test session starts' in result.stdout.str()
+    assert '\x1b[1m' in result.stdout.str()
+
+def test_color_no(testdir):
+    testdir.makepyfile("def test_this(): assert 1")
+    result = testdir.runpytest('--color=no')
+    assert 'test session starts' in result.stdout.str()
+    assert '\x1b[1m' not in result.stdout.str()
+
 def test_getreportopt():
     class config:
         class option:
@@ -522,7 +531,7 @@ def test_getreportopt():

 def test_terminalreporter_reportopt_addopts(testdir):
     testdir.makeini("[pytest]\naddopts=-rs")
-    p = testdir.makepyfile("""
+    testdir.makepyfile("""
        def pytest_funcarg__tr(request):
            tr = request.config.pluginmanager.getplugin("terminalreporter")
            return tr
@@ -570,7 +579,7 @@ class TestGenericReporting:
    provider to run e.g. distributed tests.
    """
     def test_collect_fail(self, testdir, option):
-        p = testdir.makepyfile("import xyz\n")
+        testdir.makepyfile("import xyz\n")
         result = testdir.runpytest(*option.args)
         result.stdout.fnmatch_lines([
             "> import xyz",
@@ -579,7 +588,7 @@ class TestGenericReporting:
         ])

     def test_maxfailures(self, testdir, option):
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            def test_1():
                assert 0
            def test_2():
@@ -597,7 +606,7 @@ class TestGenericReporting:


     def test_tb_option(self, testdir, option):
-        p = testdir.makepyfile("""
+        testdir.makepyfile("""
            import pytest
            def g():
                raise IndexError
@@ -678,7 +687,7 @@ def test_fdopen_kept_alive_issue124(testdir):
         ])

 def test_tbstyle_native_setup_error(testdir):
-    p = testdir.makepyfile("""
+    testdir.makepyfile("""
        import pytest
        @pytest.fixture
        def setup_error_fixture():
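The new test_color_yes/test_color_no tests detect coloring via the ANSI bold escape. A tiny self-contained check along the same lines (the helper name is illustrative):

    BOLD = "\x1b[1m"   # the escape sequence the new tests look for

    def looks_colored(text):
        # True when terminal output carries ANSI styling
        return BOLD in text

    assert looks_colored("\x1b[1mtest session starts\x1b[0m")
    assert not looks_colored("test session starts")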
testing/test_tmpdir.py

@@ -1,5 +1,4 @@
 import py, pytest
-import os

 from _pytest.tmpdir import tmpdir, TempdirHandler
testing/test_unittest.py

@@ -15,7 +15,7 @@ def test_simple_unittest(testdir):
     assert reprec.matchreport("test_failing").failed

 def test_runTest_method(testdir):
-    testpath=testdir.makepyfile("""
+    testdir.makepyfile("""
        import unittest
        pytest_plugins = "pytest_unittest"
        class MyTestCaseWithRunTest(unittest.TestCase):
@@ -236,7 +236,7 @@ def test_setup_class(testdir):
     reprec.assertoutcome(passed=3)


-@pytest.mark.multi(type=['Error', 'Failure'])
+@pytest.mark.parametrize("type", ['Error', 'Failure'])
 def test_testcase_adderrorandfailure_defers(testdir, type):
     testdir.makepyfile("""
        from unittest import TestCase
@@ -256,7 +256,7 @@ def test_testcase_adderrorandfailure_defers(testdir, type):
     result = testdir.runpytest()
     assert 'should not raise' not in result.stdout.str()

-@pytest.mark.multi(type=['Error', 'Failure'])
+@pytest.mark.parametrize("type", ['Error', 'Failure'])
 def test_testcase_custom_exception_info(testdir, type):
     testdir.makepyfile("""
        from unittest import TestCase
@@ -310,9 +310,10 @@ def test_module_level_pytestmark(testdir):
     reprec.assertoutcome(skipped=1)


-def test_testcase_skip_property(testdir):
+def test_trial_testcase_skip_property(testdir):
     pytest.importorskip('twisted.trial.unittest')
     testpath = testdir.makepyfile("""
-        import unittest
+        from twisted.trial import unittest
        class MyTestCase(unittest.TestCase):
            skip = 'dont run'
            def test_func(self):
@@ -321,9 +322,11 @@ def test_trial_testcase_skip_property(testdir):
     reprec = testdir.inline_run(testpath, "-s")
     reprec.assertoutcome(skipped=1)

-def test_testfunction_skip_property(testdir):
+
+def test_trial_testfunction_skip_property(testdir):
     pytest.importorskip('twisted.trial.unittest')
     testpath = testdir.makepyfile("""
-        import unittest
+        from twisted.trial import unittest
        class MyTestCase(unittest.TestCase):
            def test_func(self):
                pass
@@ -333,6 +336,32 @@ def test_trial_testfunction_skip_property(testdir):
     reprec.assertoutcome(skipped=1)


+def test_trial_testcase_todo_property(testdir):
+    pytest.importorskip('twisted.trial.unittest')
+    testpath = testdir.makepyfile("""
+        from twisted.trial import unittest
+        class MyTestCase(unittest.TestCase):
+            todo = 'dont run'
+            def test_func(self):
+                assert 0
+    """)
+    reprec = testdir.inline_run(testpath, "-s")
+    reprec.assertoutcome(skipped=1)
+
+
+def test_trial_testfunction_todo_property(testdir):
+    pytest.importorskip('twisted.trial.unittest')
+    testpath = testdir.makepyfile("""
+        from twisted.trial import unittest
+        class MyTestCase(unittest.TestCase):
+            def test_func(self):
+                assert 0
+            test_func.todo = 'dont run'
+    """)
+    reprec = testdir.inline_run(testpath, "-s")
+    reprec.assertoutcome(skipped=1)
+
+
 class TestTrialUnittest:
     def setup_class(cls):
         cls.ut = pytest.importorskip("twisted.trial.unittest")
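The renamed trial_* tests make explicit that these cases target twisted.trial's class-level skip and todo attributes rather than stdlib unittest. A hedged sketch of the trial idiom itself (requires twisted; pytest reports such a case as skipped, matching assertoutcome(skipped=1) above):

    from twisted.trial import unittest

    class MyTestCase(unittest.TestCase):
        skip = "dont run"      # trial-level skip for the whole case

        def test_func(self):
            assert 0           # never executed because of the skip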
tox.ini

@@ -1,21 +1,21 @@
 [tox]
 distshare={homedir}/.tox/distshare
-envlist=py25,py26,py27,py27-nobyte,py32,py33,py27-xdist,trial
+envlist=flakes,py26,py27,pypy,py27-pexpect,py33-pexpect,py27-nobyte,py32,py33,py27-xdist,py33-xdist,trial

 [testenv]
 changedir=testing
 commands= py.test --lsof -rfsxX --junitxml={envlogdir}/junit-{envname}.xml []
 deps=
     pexpect
     nose

 [testenv:genscript]
 changedir=.
 commands= py.test --genscript=pytest1

-[testenv:py25]
-setenv =
-    PIP_INSECURE=1
+[testenv:flakes]
+changedir=
+deps = pytest-flakes>=0.2
+commands = py.test --flakes -m flakes _pytest testing

 [testenv:py27-xdist]
 changedir=.
@@ -27,6 +27,28 @@ commands=
     py.test -n3 -rfsxX \
         --junitxml={envlogdir}/junit-{envname}.xml testing

+[testenv:py33-xdist]
+changedir=.
+basepython=python3.3
+deps={[testenv:py27-xdist]deps}
+commands=
+    py.test -n3 -rfsxX \
+        --junitxml={envlogdir}/junit-{envname}.xml testing
+
+[testenv:py27-pexpect]
+changedir=testing
+basepython=python2.7
+deps=pexpect
+commands=
+    py.test -rfsxX test_pdb.py test_terminal.py test_unittest.py
+
+[testenv:py33-pexpect]
+changedir=testing
+basepython=python2.7
+deps={[testenv:py27-pexpect]deps}
+commands=
+    py.test -rfsxX test_pdb.py test_terminal.py test_unittest.py
+
 [testenv:py27-nobyte]
 changedir=.
 basepython=python2.7
@@ -41,7 +63,6 @@ commands=
 [testenv:trial]
 changedir=.
 deps=twisted
-    pexpect
 commands=
     py.test -rsxf \
         --junitxml={envlogdir}/junit-{envname}.xml {posargs:testing/test_unittest.py}
@@ -50,14 +71,6 @@ changedir=.
 commands=py.test --doctest-modules _pytest
 deps=

-[testenv:py32]
-deps=
-    nose
-
-[testenv:py33]
-deps=
-    nose
-
 [testenv:doc]
 basepython=python
 changedir=doc/en
@@ -97,7 +110,7 @@ commands=
 minversion=2.0
 plugins=pytester
 #--pyargs --doctest-modules --ignore=.tox
-addopts= -rxs
+addopts= -rxsX
 rsyncdirs=tox.ini pytest.py _pytest testing
 python_files=test_*.py *_test.py testing/*/*.py
 python_classes=Test Acceptance