Compare commits

...

42 Commits
2.0.0 ... 2.0.1

Author SHA1 Message Date
holger krekel
98cd8edb71 regen docs with examples 2011-02-07 11:45:37 +01:00
holger krekel
e7b69a2ac0 doc fixes 2011-02-07 11:39:05 +01:00
holger krekel
74b9ebc1cd accept a left out "()" for ids on command line for better compatibility with pytest.vim 2011-02-07 11:09:42 +01:00
holger krekel
3004fe3915 fix the last committed relaxation of a test 2011-02-04 23:20:27 +01:00
holger krekel
eb225456d7 laxer test so that it also passes with pypy 2011-02-04 22:51:05 +01:00
holger krekel
b04f87b1a6 add release announcement 2011-02-03 15:58:22 +01:00
holger krekel
35b0b376f0 bumping version to pytest-2.0.1, regen docs and examples 2011-02-03 15:14:50 +01:00
holger krekel
762ea71f67 fix error reporting issue when a "pyc" file has no related "py" file 2011-01-27 21:11:21 +01:00
holger krekel
adacd3491d fix test related to "not in" 2011-01-27 11:36:12 +01:00
Floris Bruynooghe
709d5e3f2c Improve the "not in" assertion output
This cleans up the generic diff and tailors the output more to this
specific assertion (based on feedback from hpk).
2011-01-25 20:47:34 +00:00
holger krekel
d8d88ede65 refine and unify initial capturing - now also works if the logging module
is already used from an early-loaded conftest.py file (prior to option parsing)
2011-01-18 12:51:21 +01:00
holger krekel
b8f0d10f80 fix a pypy related regression - re-allow self.NAME style collection tree customization 2011-01-18 12:47:31 +01:00
holger krekel
aea4d1bd7a fix regression with yield-based tests (hopefully) 2011-01-14 13:30:36 +01:00
holger krekel
2b750074f4 fix typo (thanks derdon) 2011-01-13 23:50:10 +01:00
holger krekel
88cfaebbcb fix issue12 - show plugin versions with "--version" and "--traceconfig" and also document how to add extra information to reporting test header 2011-01-12 19:39:36 +01:00
holger krekel
426e056d2b fix issue10 - numpy arrays should now work better in assertion expressions
(or any other objects which have an exception-raising __nonzero__ method ...)
2011-01-12 19:17:54 +01:00
holger krekel
4445685285 pypy doesn't necessarily honour -OO it seems, let's not test assertions there. 2011-01-12 18:57:40 +01:00
holger krekel
ae9b7a8bea use pypi.testrun.org so that py>1.4.0 gets picked up correctly 2011-01-12 18:03:55 +01:00
holger krekel
5daef51000 fix issue14: it was actually issue14, not issue8, that was fixed with
the older https://bitbucket.org/hpk42/pytest/changeset/1c3eb86502b3

please try out with the usual "pip install -i http://pypi.testrun.org -U pytest"
2011-01-12 17:35:09 +01:00
holger krekel
647b56614a fix issue17 by requiring an update to pylib which helps to fix it 2011-01-12 17:21:11 +01:00
holger krekel
1b3fb3d229 fix issue15 - tests for python3/nose-1.0 combo work now 2011-01-11 17:27:34 +01:00
holger krekel
170c78cef9 remove same-conftest.py detection - does more harm than good
(see mail from Ralf Schmitt on py-dev)
2011-01-11 15:54:47 +01:00
Benjamin Peterson
8f5d837ef6 duplicate word 2010-12-23 14:56:38 -06:00
holger krekel
0ec5f3fd6c small improvements, add assertion improvement to CHANGELOG 2010-12-10 12:28:04 +01:00
Floris Bruynooghe
8631c1f57a Add "not in" to detailed explanations
This simply uses difflib to compare the text without the offending
string to the full text.

Also ensures the summary line uses all space available.  But the
terminal width is still hardcoded.
2010-12-10 01:03:26 +00:00
holger krekel
821f493378 check docstring at test time instead of runtime, improve and test warning on assertion turned off (thanks FND for reporting) 2010-12-09 11:00:31 +01:00
holger krekel
4086d46378 fix issue11 doc typo (thanks Eduardo) 2010-12-07 15:25:25 +01:00
holger krekel
a15983cb33 rather named the new hook cmdline_preparse 2010-12-07 12:34:18 +01:00
holger krekel
9ab256c296 make getvalueorskip() be hidden in skip-reporting. also bump version. 2010-12-07 12:18:24 +01:00
holger krekel
7db9e98b55 introduce a pytest_cmdline_processargs hook to modify/add dynamically to command line arguments. 2010-12-07 12:14:12 +01:00
holger krekel
e6541ed14e bump version and fix changelog issue reference 2010-12-06 19:01:50 +01:00
holger krekel
fc4f72cb1f fix issue7 - assert failure inside doctest doesn't prettyprint
unexpected exceptions are now reported within the doctest failure
representation context.
2010-12-06 19:00:30 +01:00
holger krekel
feea4ea3d5 fix hasplugin() method / test failures 2010-12-06 18:32:04 +01:00
holger krekel
513482f4f7 fix issue9 - wrong XPass with failing setup/teardown function of an xfail-marked test
now when setup or teardown of a test item/function fails and the test
is marked "xfail" it will show up as an xfailed test.
2010-12-06 18:20:47 +01:00
holger krekel
2e80512bb8 fix issue8: avoid errors caused by the logging module wanting to close already closed streams.
The issue arose if logging was initialized while capturing was enabled
and then capturing streams were closed before process exit, leading
the logging module to complain.
2010-12-06 16:56:12 +01:00
holger krekel
c7531705fc refine plugin registration, allow new "-p no:NAME" way to prevent/undo plugin registration 2010-12-06 16:54:42 +01:00
holger krekel
752965c298 add some docs and new projects 2010-12-06 10:41:20 +01:00
holger krekel
96a687b97c make pytest test suite pypy ready 2010-11-27 16:40:52 +01:00
holger krekel
d894bae281 bumping version to a dev version, run tests using the main Python PyPI by default 2010-11-26 13:37:00 +01:00
holger krekel
f1fc6e5eb6 regenerating examples 2010-11-26 13:26:56 +01:00
Benjamin Peterson
ca72c162c8 need double colon here 2010-11-25 20:55:32 -06:00
holger krekel
9bcb66d9a5 Added tag 2.0.0 for changeset e9e127acd6f0 2010-11-25 21:03:08 +01:00
62 changed files with 1164 additions and 475 deletions

View File

@@ -31,3 +31,4 @@ c59d3fa8681a5b5966b8375b16fccd64a3a8dbeb 1.3.3
79ef6377705184c55633d456832eea318fedcf61 1.3.4
79ef6377705184c55633d456832eea318fedcf61 1.3.4
90fffd35373e9f125af233f78b19416f0938d841 1.3.4
e9e127acd6f0497324ef7f40cfb997cad4c4cd17 2.0.0

View File

@@ -1,4 +1,53 @@
Changes between 1.3.4 and 2.0.0dev0
Changes between 2.0.0 and 2.0.1
----------------------------------------------
- refine and unify initial capturing so that it works nicely
even if the logging module is used from an early-loaded conftest.py
file or plugin.
- allow omitting "()" in test ids, giving uniform test ids
as produced by Alfredo's nice pytest.vim plugin.
- fix issue12 - show plugin versions with "--version" and
"--traceconfig" and also document how to add extra information
to reporting test header
- fix issue17 (import-* reporting issue on python3) by
requiring py>1.4.0 (1.4.1 is going to include it)
- fix issue10 (numpy arrays truth checking) by refining
assertion interpretation in py lib
- fix issue15: make nose compatibility tests compatible
with python3 (now that nose-1.0 supports python3)
- remove somewhat surprising "same-conftest" detection because
it ignored conftest.py files that appear in several subdirs.
- improve assertions ("not in"), thanks Floris Bruynooghe
- improve behaviour/warnings when running on top of "python -OO"
(assertions and docstrings are turned off, leading to potential
false positives)
- introduce a pytest_cmdline_preparse(config, args) hook
to allow dynamic computation of command line arguments.
This fixes a regression: py.test prior to 2.0 allowed
setting command line options from conftest.py files,
which pytest-2.0 had so far only allowed from ini-files.
- fix issue7: assert failures in doctest modules.
unexpected failures in doctests will now generally
show nicer, i.e. within the doctest failing context.
- fix issue9: setup/teardown functions for an xfail-marked
test will report as xfail if they fail but report as normally
passing (not xpassing) if they succeed. This is only true
for "direct" setup/teardown invocations because teardown_class/
teardown_module cannot closely relate to a single test.
- fix issue14: no logging errors at process exit
- refinements to "collecting" output on non-ttys
- refine internal plugin registration and --traceconfig output
- introduce a mechanism to prevent/unregister plugins from the
command line, see http://pytest.org/plugins.html#cmdunregister
- activate resultlog plugin by default
- fix regression wrt yielded tests which due to the
collection-before-running semantics were not
set up as with pytest 1.3.4. Note, however, that
the recommended and much cleaner way to do test
parametrization remains the "pytest_generate_tests"
mechanism, see the docs.
Changes between 1.3.4 and 2.0.0
----------------------------------------------
- pytest-2.0 is now its own package and depends on pylib-2.0

View File

@@ -17,8 +17,8 @@ def pytest_configure(config):
# turn call the hooks defined here as part of the
# DebugInterpreter.
config._monkeypatch = m = monkeypatch()
warn_about_missing_assertion()
if not config.getvalue("noassert") and not config.getvalue("nomagic"):
warn_about_missing_assertion()
def callbinrepr(op, left, right):
hook_result = config.hook.pytest_assertrepr_compare(
config=config, op=op, left=left, right=right)
@@ -38,9 +38,8 @@ def warn_about_missing_assertion():
except AssertionError:
pass
else:
py.std.warnings.warn("Assertions are turned off!"
" (are you using python -O?)")
sys.stderr.write("WARNING: failing tests may report as passing because "
"assertions are turned off! (are you using python -O?)\n")
# Provide basestring in python3
try:
@@ -51,8 +50,9 @@ except NameError:
def pytest_assertrepr_compare(op, left, right):
"""return specialised explanations for some operators/operands"""
left_repr = py.io.saferepr(left, maxsize=30)
right_repr = py.io.saferepr(right, maxsize=30)
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
left_repr = py.io.saferepr(left, maxsize=width/2)
right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
summary = '%s %s %s' % (left_repr, op, right_repr)
issequence = lambda x: isinstance(x, (list, tuple))
@@ -72,6 +72,9 @@ def pytest_assertrepr_compare(op, left, right):
elif isdict(left) and isdict(right):
explanation = _diff_text(py.std.pprint.pformat(left),
py.std.pprint.pformat(right))
elif op == 'not in':
if istext(left) and istext(right):
explanation = _notin_text(left, right)
except py.builtin._sysex:
raise
except:
@@ -155,3 +158,22 @@ def _compare_eq_set(left, right):
for item in diff_right:
explanation.append(py.io.saferepr(item))
return explanation
def _notin_text(term, text):
index = text.find(term)
head = text[:index]
tail = text[index+len(term):]
correct_text = head + tail
diff = _diff_text(correct_text, text)
newdiff = ['%s is contained here:' % py.io.saferepr(term, maxsize=42)]
for line in diff:
if line.startswith('Skipping'):
continue
if line.startswith('- '):
continue
if line.startswith('+ '):
newdiff.append(' ' + line[2:])
else:
newdiff.append(line)
return newdiff
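
The hunks above route comparison explanations through the ``pytest_assertrepr_compare`` hook, so a project can contribute its own explanations from a conftest.py as well. A minimal, hypothetical sketch (the type check and the wording are invented, not part of this changeset)::

    # conftest.py -- illustrative only
    def pytest_assertrepr_compare(op, left, right):
        if op == "==" and isinstance(left, str) and isinstance(right, str):
            return ["comparing two strings:",
                    "  left:  %r" % left,
                    "  right: %r" % right]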

View File

@@ -19,14 +19,14 @@ def addouterr(rep, outerr):
if content:
repr.addsection("Captured std%s" % secname, content.rstrip())
def pytest_configure(config):
config.pluginmanager.register(CaptureManager(), 'capturemanager')
def pytest_unconfigure(config):
# registered in config.py during early conftest.py loading
capman = config.pluginmanager.getplugin('capturemanager')
while capman._method2capture:
name, cap = capman._method2capture.popitem()
cap.reset()
# XXX logging module may want to close it itself on process exit
# otherwise we could do finalization here and call "reset()".
cap.suspend()
class NoCapture:
def startall(self):
@@ -65,6 +65,14 @@ class CaptureManager:
else:
raise ValueError("unknown capturing method: %r" % method)
def _getmethod_preoptionparse(self, args):
if '-s' in args or "--capture=no" in args:
return "no"
elif hasattr(os, 'dup') and '--capture=sys' not in args:
return "fd"
else:
return "sys"
def _getmethod(self, config, fspath):
if config.option.capture:
method = config.option.capture

View File

@@ -34,7 +34,7 @@ class Parser:
def getgroup(self, name, description="", after=None):
""" get (or create) a named option Group.
:name: unique name of the option group.
:description: long description for --help output.
:after: name of other group, used for ordering --help output.
@@ -125,7 +125,6 @@ class Conftest(object):
self._onimport = onimport
self._conftestpath2mod = {}
self._confcutdir = confcutdir
self._md5cache = {}
def setinitial(self, args):
""" try to find a first anchor path for looking up global values
@@ -179,14 +178,7 @@ class Conftest(object):
else:
conftestpath = path.join("conftest.py")
if conftestpath.check(file=1):
key = conftestpath.computehash()
# XXX logging about conftest loading
if key not in self._md5cache:
clist.append(self.importconftest(conftestpath))
self._md5cache[key] = conftestpath
else:
# use some kind of logging
print ("WARN: not loading %s" % conftestpath)
clist.append(self.importconftest(conftestpath))
clist[:0] = self.getconftestmodules(dp)
self._path2confmods[path] = clist
# be defensive: avoid changes from caller side to
@@ -278,13 +270,16 @@ class Config(object):
def _setinitialconftest(self, args):
# capture output during conftest init (#issue93)
name = hasattr(os, 'dup') and 'StdCaptureFD' or 'StdCapture'
cap = getattr(py.io, name)()
from _pytest.capture import CaptureManager
capman = CaptureManager()
self.pluginmanager.register(capman, 'capturemanager')
# will be unregistered in capture.py's unconfigure()
capman.resumecapture(capman._getmethod_preoptionparse(args))
try:
try:
self._conftest.setinitial(args)
finally:
out, err = cap.reset()
out, err = capman.suspendcapture() # logging might have got it
except:
sys.stdout.write(out)
sys.stderr.write(err)
@@ -300,11 +295,13 @@ class Config(object):
if addopts:
args[:] = self.getini("addopts") + args
self._checkversion()
self.pluginmanager.consider_preparse(args)
self.pluginmanager.consider_setuptools_entrypoints()
self.pluginmanager.consider_env()
self.pluginmanager.consider_preparse(args)
self._setinitialconftest(args)
self.pluginmanager.do_addoption(self._parser)
if addopts:
self.hook.pytest_cmdline_preparse(config=self, args=args)
def _checkversion(self):
minver = self.inicfg.get('minversion', None)
@@ -397,7 +394,9 @@ class Config(object):
return self._getconftest(name, path, check=False)
def getvalueorskip(self, name, path=None):
""" (deprecated) return getvalue(name) or call py.test.skip if no value exists. """
""" (deprecated) return getvalue(name) or call
py.test.skip if no value exists. """
__tracebackhide__ = True
try:
val = self.getvalue(name, path)
if val is None:
@@ -421,7 +420,7 @@ def getcfg(args, inibasenames):
if 'pytest' in iniconfig.sections:
return iniconfig['pytest']
return {}
def findupwards(current, basename):
current = py.path.local(current)
while 1:

View File

@@ -11,11 +11,9 @@ assert py.__version__.split(".")[:2] >= ['1', '4'], ("installation problem: "
"%s is too old, remove or upgrade 'py'" % (py.__version__))
default_plugins = (
"config mark session terminal runner python pdb unittest capture skipping "
"config mark main terminal runner python pdb unittest capture skipping "
"tmpdir monkeypatch recwarn pastebin helpconfig nose assertion genscript "
"junitxml doctest").split()
IMPORTPREFIX = "pytest_"
"junitxml resultlog doctest").split()
class TagTracer:
def __init__(self, prefix="[pytest] "):
@@ -65,6 +63,7 @@ class PluginManager(object):
self._plugins = []
self._hints = []
self.trace = TagTracer().get("pluginmanage")
self._plugin_distinfo = []
if os.environ.get('PYTEST_DEBUG'):
err = sys.stderr
encoding = getattr(err, 'encoding', 'utf8')
@@ -79,20 +78,12 @@ class PluginManager(object):
for spec in default_plugins:
self.import_plugin(spec)
def _getpluginname(self, plugin, name):
if name is None:
if hasattr(plugin, '__name__'):
name = plugin.__name__.split(".")[-1]
else:
name = id(plugin)
return name
def register(self, plugin, name=None, prepend=False):
assert not self.isregistered(plugin), plugin
assert not self.isregistered(plugin), plugin
name = self._getpluginname(plugin, name)
name = name or getattr(plugin, '__name__', str(id(plugin)))
if name in self._name2plugin:
return False
#self.trace("registering", name, plugin)
self._name2plugin[name] = plugin
self.call_plugin(plugin, "pytest_addhooks", {'pluginmanager': self})
self.hook.pytest_plugin_registered(manager=self, plugin=plugin)
@@ -112,7 +103,7 @@ class PluginManager(object):
del self._name2plugin[name]
def isregistered(self, plugin, name=None):
if self._getpluginname(plugin, name) in self._name2plugin:
if self.getplugin(name) is not None:
return True
for val in self._name2plugin.values():
if plugin == val:
@@ -129,18 +120,15 @@ class PluginManager(object):
py.test.skip("plugin %r is missing" % name)
def hasplugin(self, name):
try:
self.getplugin(name)
return True
except KeyError:
return False
return bool(self.getplugin(name))
def getplugin(self, name):
if name is None:
return None
try:
return self._name2plugin[name]
except KeyError:
impname = canonical_importname(name)
return self._name2plugin[impname]
return self._name2plugin.get("_pytest." + name, None)
# API for bootstrapping
#
@@ -160,19 +148,29 @@ class PluginManager(object):
except ImportError:
return # XXX issue a warning
for ep in iter_entry_points('pytest11'):
name = canonical_importname(ep.name)
if name in self._name2plugin:
name = ep.name
if name.startswith("pytest_"):
name = name[7:]
if ep.name in self._name2plugin or name in self._name2plugin:
continue
try:
plugin = ep.load()
except DistributionNotFound:
continue
self._plugin_distinfo.append((ep.dist, plugin))
self.register(plugin, name=name)
def consider_preparse(self, args):
for opt1,opt2 in zip(args, args[1:]):
if opt1 == "-p":
self.import_plugin(opt2)
if opt2.startswith("no:"):
name = opt2[3:]
if self.getplugin(name) is not None:
self.unregister(None, name=name)
self._name2plugin[name] = -1
else:
if self.getplugin(opt2) is None:
self.import_plugin(opt2)
def consider_conftest(self, conftestmodule):
if self.register(conftestmodule, name=conftestmodule.__file__):
@@ -186,15 +184,19 @@ class PluginManager(object):
for spec in attr:
self.import_plugin(spec)
def import_plugin(self, spec):
assert isinstance(spec, str)
modname = canonical_importname(spec)
if modname in self._name2plugin:
def import_plugin(self, modname):
assert isinstance(modname, str)
if self.getplugin(modname) is not None:
return
try:
#self.trace("importing", modname)
mod = importplugin(modname)
except KeyboardInterrupt:
raise
except ImportError:
if modname.startswith("pytest_"):
return self.import_plugin(modname[7:])
raise
except:
e = py.std.sys.exc_info()[1]
if not hasattr(py.test, 'skip'):
@@ -290,34 +292,18 @@ class PluginManager(object):
return MultiCall(methods=self.listattr(methname, plugins=[plugin]),
kwargs=kwargs, firstresult=True).execute()
def canonical_importname(name):
if '.' in name:
return name
name = name.lower()
if not name.startswith(IMPORTPREFIX):
name = IMPORTPREFIX + name
return name
def importplugin(importspec):
#print "importing", importspec
name = importspec
try:
return __import__(importspec, None, None, '__doc__')
mod = "_pytest." + name
return __import__(mod, None, None, '__doc__')
except ImportError:
e = py.std.sys.exc_info()[1]
if str(e).find(importspec) == -1:
raise
name = importspec
try:
if name.startswith("pytest_"):
name = importspec[7:]
return __import__("_pytest.%s" %(name), None, None, '__doc__')
except ImportError:
e = py.std.sys.exc_info()[1]
if str(e).find(name) == -1:
raise
# show the original exception, not the failing internal one
return __import__(importspec, None, None, '__doc__')
#e = py.std.sys.exc_info()[1]
#if str(e).find(name) == -1:
# raise
pass #
return __import__(importspec, None, None, '__doc__')
class MultiCall:
""" execute a call into multiple python functions/methods. """
@@ -378,9 +364,6 @@ class HookRelay:
added = False
for name, method in vars(hookspecs).items():
if name.startswith(prefix):
if not method.__doc__:
raise ValueError("docstring required for hook %r, in %r"
% (method, hookspecs))
firstresult = getattr(method, 'firstresult', False)
hc = HookCaller(self, name, firstresult=firstresult)
setattr(self, name, hc)
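
The ``consider_preparse`` change above implements the new "-p no:NAME" switch mentioned in the changelog; usage is simply (the plugin name is only an example)::

    py.test -p no:doctest      # prevent the builtin doctest plugin from registering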

View File

@@ -34,7 +34,9 @@ class ReprFailDoctest(TerminalRepr):
class DoctestItem(pytest.Item):
def repr_failure(self, excinfo):
if excinfo.errisinstance(py.std.doctest.DocTestFailure):
doctest = py.std.doctest
if excinfo.errisinstance((doctest.DocTestFailure,
doctest.UnexpectedException)):
doctestfailure = excinfo.value
example = doctestfailure.example
test = doctestfailure.test
@@ -50,12 +52,15 @@ class DoctestItem(pytest.Item):
for line in filelines[i:lineno]:
lines.append("%03d %s" % (i+1, line))
i += 1
lines += checker.output_difference(example,
doctestfailure.got, REPORT_UDIFF).split("\n")
if excinfo.errisinstance(doctest.DocTestFailure):
lines += checker.output_difference(example,
doctestfailure.got, REPORT_UDIFF).split("\n")
else:
inner_excinfo = py.code.ExceptionInfo(excinfo.value.exc_info)
lines += ["UNEXPECTED EXCEPTION: %s" %
repr(inner_excinfo.value)]
return ReprFailDoctest(reprlocation, lines)
elif excinfo.errisinstance(py.std.doctest.UnexpectedException):
excinfo = py.code.ExceptionInfo(excinfo.value.exc_info)
return super(DoctestItem, self).repr_failure(excinfo)
else:
return super(DoctestItem, self).repr_failure(excinfo)
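
With the change above, an unexpected exception inside a doctest is reported within the doctest failure context instead of as a generic traceback. A tiny module that would trigger it when run with ``--doctest-modules`` (contents invented for illustration)::

    # mymath.py -- illustrative only
    def divide(a, b):
        """
        >>> divide(1, 0)
        0
        """
        return a / b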

View File

@@ -28,6 +28,10 @@ def pytest_cmdline_main(config):
p = py.path.local(pytest.__file__)
sys.stderr.write("This is py.test version %s, imported from %s\n" %
(pytest.__version__, p))
plugininfo = getpluginversioninfo(config)
if plugininfo:
for line in plugininfo:
sys.stderr.write(line + "\n")
return 0
elif config.option.help:
config.pluginmanager.do_configure(config)
@@ -69,18 +73,37 @@ conftest_options = [
('pytest_plugins', 'list of plugin names to load'),
]
def getpluginversioninfo(config):
lines = []
plugininfo = config.pluginmanager._plugin_distinfo
if plugininfo:
lines.append("setuptools registered plugins:")
for dist, plugin in plugininfo:
loc = getattr(plugin, '__file__', repr(plugin))
content = "%s-%s at %s" % (dist.project_name, dist.version, loc)
lines.append(" " + content)
return lines
def pytest_report_header(config):
lines = []
if config.option.debug or config.option.traceconfig:
lines.append("using: pytest-%s pylib-%s" %
(pytest.__version__,py.__version__))
verinfo = getpluginversioninfo(config)
if verinfo:
lines.extend(verinfo)
if config.option.traceconfig:
lines.append("active plugins:")
plugins = []
items = config.pluginmanager._name2plugin.items()
for name, plugin in items:
lines.append(" %-20s: %s" %(name, repr(plugin)))
if hasattr(plugin, '__file__'):
r = plugin.__file__
else:
r = repr(plugin)
lines.append(" %-20s: %s" %(name, r))
return lines
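
Besides the plugin version info added above, extra lines can be contributed to the report header from a conftest.py via the existing ``pytest_report_header`` hook, for example (the header text is made up)::

    # conftest.py -- illustrative only
    def pytest_report_header(config):
        return ["example project: running against a staging configuration"]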

View File

@@ -19,6 +19,9 @@ def pytest_cmdline_parse(pluginmanager, args):
"""return initialized config object, parsing the specified args. """
pytest_cmdline_parse.firstresult = True
def pytest_cmdline_preparse(config, args):
"""modify command line arguments before option parsing. """
def pytest_addoption(parser):
"""add optparse-style options and ini-style config values via calls
to ``parser.addoption`` and ``parser.addini(...)``.
@@ -202,7 +205,6 @@ def pytest_doctest_prepare_content(content):
""" return processed content for a given doctest"""
pytest_doctest_prepare_content.firstresult = True
# -------------------------------------------------------------------------
# error handling and internal debugging hooks
# -------------------------------------------------------------------------
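
The ``pytest_cmdline_preparse`` hook added above receives the raw argument list before option parsing, so a conftest.py can again inject options as it could before 2.0. A minimal sketch (the injected option is arbitrary)::

    # conftest.py -- illustrative only
    def pytest_cmdline_preparse(config, args):
        args[:] = ["-v"] + args   # modify the list in place; here: always run verbosely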

View File

@@ -40,7 +40,7 @@ def pytest_addoption(parser):
help="only load conftest.py's relative to specified dir.")
group = parser.getgroup("debugconfig",
"test process debugging and configuration")
"test session debugging and configuration")
group.addoption('--basetemp', dest="basetemp", default=None, metavar="dir",
help="base temporary directory for this test run.")
@@ -152,6 +152,7 @@ class Node(object):
Module = compatproperty("Module")
Class = compatproperty("Class")
Instance = compatproperty("Instance")
Function = compatproperty("Function")
File = compatproperty("File")
Item = compatproperty("Item")
@@ -336,7 +337,7 @@ class Session(FSCollector):
def __init__(self, config):
super(Session, self).__init__(py.path.local(), parent=None,
config=config, session=self)
self.config.pluginmanager.register(self, name="session", prepend=True)
assert self.config.pluginmanager.register(self, name="session", prepend=True)
self._testsfailed = 0
self.shouldstop = False
self.trace = config.trace.root.get("collection")
@@ -448,7 +449,7 @@ class Session(FSCollector):
p = p.dirpath()
else:
p = p.new(basename=p.purebasename+".py")
return p
return str(p)
def _parsearg(self, arg):
""" return (fspath, names) tuple after checking the file exists. """
@@ -494,9 +495,15 @@ class Session(FSCollector):
node.ihook.pytest_collectstart(collector=node)
rep = node.ihook.pytest_make_collect_report(collector=node)
if rep.passed:
has_matched = False
for x in rep.result:
if x.name == name:
resultnodes.extend(self.matchnodes([x], nextnames))
has_matched = True
# XXX accept IDs that don't have "()" for class instances
if not has_matched and len(rep.result) == 1 and x.name == "()":
nextnames.insert(0, name)
resultnodes.extend(self.matchnodes([x], nextnames))
node.ihook.pytest_collectreport(report=rep)
return resultnodes
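
The ``has_matched`` fallback above is what lets command-line ids leave out the "()" instance part, so ids as produced by pytest.vim work directly. For example, both of these invocations (module and names invented) should now select the same test::

    py.test test_module.py::TestClass::()::test_method
    py.test test_module.py::TestClass::test_method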

View File

@@ -6,7 +6,7 @@ import re
import inspect
import time
from fnmatch import fnmatch
from _pytest.session import Session
from _pytest.main import Session
from py.builtin import print_
from _pytest.core import HookRelay
@@ -515,6 +515,8 @@ class TmpTestdir:
def spawn(self, cmd, expect_timeout=10.0):
pexpect = py.test.importorskip("pexpect", "2.4")
if hasattr(sys, 'pypy_version_info') and '64' in py.std.platform.machine():
pytest.skip("pypy-64 bit not supported")
logfile = self.tmpdir.join("spawn.out")
child = pexpect.spawn(cmd, logfile=logfile.open("w"))
child.timeout = expect_timeout

View File

@@ -73,7 +73,7 @@ def pytest_pycollect_makeitem(__multicall__, collector, name, obj):
if collector._istestclasscandidate(name, obj):
#if hasattr(collector.obj, 'unittest'):
# return # we assume it's a mixin class for a TestCase derived one
return Class(name, parent=collector)
return collector.Class(name, parent=collector)
elif collector.funcnamefilter(name) and hasattr(obj, '__call__'):
if is_generator(obj):
return Generator(name, parent=collector)
@@ -160,7 +160,7 @@ class PyCollectorMixin(PyobjMixin, pytest.Collector):
for prefix in self.config.getini("python_functions"):
if name.startswith(prefix):
return True
def classnamefilter(self, name):
for prefix in self.config.getini("python_classes"):
if name.startswith(prefix):
@@ -214,11 +214,11 @@ class PyCollectorMixin(PyobjMixin, pytest.Collector):
plugins = self.getplugins() + extra
gentesthook.pcall(plugins, metafunc=metafunc)
if not metafunc._calls:
return Function(name, parent=self)
return self.Function(name, parent=self)
l = []
for callspec in metafunc._calls:
subname = "%s[%s]" %(name, callspec.id)
function = Function(name=subname, parent=self,
function = self.Function(name=subname, parent=self,
callspec=callspec, callobj=funcobj, keywords={callspec.id:True})
l.append(function)
return l
@@ -272,7 +272,7 @@ class Module(pytest.File, PyCollectorMixin):
class Class(PyCollectorMixin, pytest.Collector):
def collect(self):
return [Instance(name="()", parent=self)]
return [self.Instance(name="()", parent=self)]
def setup(self):
setup_class = getattr(self.obj, 'setup_class', None)
@@ -304,7 +304,9 @@ class FunctionMixin(PyobjMixin):
name = 'setup_method'
else:
name = 'setup_function'
if isinstance(self.parent, Instance):
if hasattr(self, '_preservedparent'):
obj = self._preservedparent
elif isinstance(self.parent, Instance):
obj = self.parent.newinstance()
self.obj = self._getobj()
else:
@@ -377,7 +379,8 @@ class Generator(FunctionMixin, PyCollectorMixin, pytest.Collector):
# invoke setup/teardown on popular request
# (induced by the common "test_*" naming shared with normal tests)
self.config._setupstate.prepare(self)
# see FunctionMixin.setup and test_setupstate_is_preserved_134
self._preservedparent = self.parent.obj
l = []
seen = {}
for i, x in enumerate(self.obj()):
@@ -391,7 +394,7 @@ class Generator(FunctionMixin, PyCollectorMixin, pytest.Collector):
if name in seen:
raise ValueError("%r generated tests with non-unique name %r" %(self, name))
seen[name] = True
l.append(Function(name, self, args=args, callobj=call))
l.append(self.Function(name, self, args=args, callobj=call))
return l
def getcallargs(self, obj):
@@ -525,10 +528,10 @@ class Metafunc:
def addcall(self, funcargs=None, id=_notexists, param=_notexists):
""" add a new call to the underlying test function during the
collection phase of a test run.
:arg funcargs: argument keyword dictionary used when invoking
the test function.
:arg id: used for reporting and identification purposes. If you
don't supply an `id` the length of the currently
list of calls to the test function will be used.
@@ -559,7 +562,7 @@ class FuncargRequest:
class LookupError(LookupError):
""" error on performing funcarg request. """
def __init__(self, pyfuncitem):
self._pyfuncitem = pyfuncitem
if hasattr(pyfuncitem, '_requestparam'):
@@ -587,7 +590,7 @@ class FuncargRequest:
def module(self):
""" module where the test function was collected. """
return self._pyfuncitem.getparent(pytest.Module).obj
@property
def cls(self):
""" class (can be None) where the test function was collected. """
@@ -603,7 +606,7 @@ class FuncargRequest:
def config(self):
""" the pytest config object associated with this request. """
return self._pyfuncitem.config
@property
def fspath(self):
""" the file system path of the test module which collected this test. """
@@ -730,7 +733,7 @@ class FuncargRequest:
raise self.LookupError(msg)
def showfuncargs(config):
from _pytest.session import Session
from _pytest.main import Session
session = Session(config)
session.perform_collect()
if session.items:
@@ -777,7 +780,7 @@ def getlocation(function, curdir):
def raises(ExpectedException, *args, **kwargs):
""" assert that a code block/function call raises @ExpectedException
and raise a failure exception otherwise.
If using Python 2.5 or above, you may use this function as a
context manager::
@@ -800,7 +803,7 @@ def raises(ExpectedException, *args, **kwargs):
A third possibility is to use a string which will
be executed::
>>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...>
"""
@@ -849,3 +852,4 @@ class RaisesContext(object):
pytest.fail("DID NOT RAISE")
self.excinfo.__init__(tp)
return issubclass(self.excinfo.type, self.ExpectedException)
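
The changelog recommends the ``pytest_generate_tests`` mechanism for parametrization; building on the ``Metafunc.addcall`` API shown above, a minimal sketch looks like this (names and values are invented)::

    # test_square.py -- illustrative only
    def pytest_generate_tests(metafunc):
        if "numiter" in metafunc.funcargnames:
            for value in (1, 2, 3):
                metafunc.addcall(funcargs=dict(numiter=value))

    def test_square_not_smaller(numiter):
        assert numiter ** 2 >= numiter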

View File

@@ -1,12 +1,12 @@
""" (disabled by default) create result information in a plain text file. """
import py
from py.builtin import print_
def pytest_addoption(parser):
group = parser.getgroup("resultlog", "resultlog plugin options")
group.addoption('--resultlog', action="store", dest="resultlog", metavar="path", default=None,
help="path for machine-readable result log.")
group = parser.getgroup("terminal reporting", "resultlog plugin options")
group.addoption('--resultlog', action="store", dest="resultlog",
metavar="path", default=None,
help="path for machine-readable result log.")
def pytest_configure(config):
resultlog = config.option.resultlog
@@ -52,9 +52,9 @@ class ResultLog(object):
self.logfile = logfile # preferably line buffered
def write_log_entry(self, testpath, lettercode, longrepr):
print_("%s %s" % (lettercode, testpath), file=self.logfile)
py.builtin.print_("%s %s" % (lettercode, testpath), file=self.logfile)
for line in longrepr.splitlines():
print_(" %s" % line, file=self.logfile)
py.builtin.print_(" %s" % line, file=self.logfile)
def log_outcome(self, report, lettercode, longrepr):
testpath = getattr(report, 'nodeid', None)
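
Since the resultlog plugin is now activated by default, its option is available without extra configuration (the path is just an example)::

    py.test --resultlog=pytest-results.log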

View File

@@ -97,19 +97,19 @@ def pytest_runtest_makereport(__multicall__, item, call):
rep.keywords['xfail'] = "reason: " + call.excinfo.value.msg
rep.outcome = "skipped"
return rep
if call.when == "call":
rep = __multicall__.execute()
evalxfail = getattr(item, '_evalxfail')
if not item.config.getvalue("runxfail") and evalxfail.istrue():
if call.excinfo:
rep.outcome = "skipped"
else:
rep.outcome = "failed"
rep = __multicall__.execute()
evalxfail = item._evalxfail
if not item.config.option.runxfail and evalxfail.istrue():
if call.excinfo:
rep.outcome = "skipped"
rep.keywords['xfail'] = evalxfail.getexplanation()
else:
if 'xfail' in rep.keywords:
del rep.keywords['xfail']
return rep
elif call.when == "call":
rep.outcome = "failed"
rep.keywords['xfail'] = evalxfail.getexplanation()
else:
if 'xfail' in rep.keywords:
del rep.keywords['xfail']
return rep
# called by terminalreporter progress reporting
def pytest_report_teststatus(report):
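
The reworked report logic above addresses issue9: when setup fails for an xfail-marked test, the test now shows up as xfailed instead of as an error. A small illustrative module (entirely made up)::

    # test_xfail_setup.py -- illustrative only
    import pytest

    def setup_function(function):
        raise ValueError("simulated setup problem")

    @pytest.mark.xfail
    def test_marked():
        pass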

View File

@@ -136,6 +136,9 @@ class TerminalReporter:
self._tw.line()
self.currentfspath = None
def write(self, content, **markup):
self._tw.write(content, **markup)
def write_line(self, line, **markup):
line = str(line)
self.ensure_newline()
@@ -215,7 +218,7 @@ class TerminalReporter:
def pytest_collection(self):
if not self.hasmarkup:
self.write_line("collecting ...", bold=True)
self.write("collecting ... ", bold=True)
def pytest_collectreport(self, report):
if report.failed:

View File

@@ -15,7 +15,7 @@ ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest
regen:
COLUMNS=76 regendoc --update *.txt */*.txt
PYTHONDONTWRITEBYTECODE=1 COLUMNS=76 regendoc --update *.txt */*.txt
help:
@echo "Please use \`make <target>' where <target> is one of"
@@ -40,7 +40,7 @@ clean:
-rm -rf $(BUILDDIR)/*
install: clean html
rsync -avz _build/html/ code:www-pytest/2.0.0
rsync -avz _build/html/ code:www-pytest/
html:
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html

View File

@@ -7,7 +7,7 @@
released versions in the <a href="http://pypi.python.org/pypi/pytest">Python
Package Index</a>.</p>
{% else %}
<p><b><a href="{{ pathto('announce/index')}}">{{ version }} release</a></b>
<p><b><a href="{{ pathto('announce/index')}}">{{ release }} release</a></b>
[<a href="{{ pathto('changelog') }}">Changelog</a>]</p>
<p>
<a href="http://pypi.python.org/pypi/pytest">pytest on PyPI</a>

View File

@@ -5,5 +5,6 @@ Release announcements
.. toctree::
:maxdepth: 2
release-2.0.1
release-2.0.0

View File

@@ -0,0 +1,72 @@
py.test 2.0.1: bug fixes
===========================================================================
Welcome to pytest-2.0.1, a maintenance and bug fix release. For detailed
changes see below. pytest is a mature testing tool for Python,
supporting CPython 2.4-3.2, Jython and latest PyPy interpreters. See
docs with examples here:
http://pytest.org/
A note on packaging: pytest used to be part of the "py" distribution up
until version py-1.3.4 but this has changed now: pytest-2.0.X only
contains py.test related code and is expected to be backward-compatible
with existing test code. If you want to install pytest, just type one of::
pip install -U pytest
easy_install -U pytest
Many thanks to all issue reporters and people asking questions or
complaining. Particular thanks to Floris Bruynooghe and Ronny Pfannschmidt
for their great coding contributions and many others for feedback and help.
best,
holger krekel
Changes between 2.0.0 and 2.0.1
----------------------------------------------
- refine and unify initial capturing so that it works nicely
even if the logging module is used from an early-loaded conftest.py
file or plugin.
- fix issue12 - show plugin versions with "--version" and
"--traceconfig" and also document how to add extra information
to reporting test header
- fix issue17 (import-* reporting issue on python3) by
requiring py>1.4.0 (1.4.1 is going to include it)
- fix issue10 (numpy arrays truth checking) by refining
assertion interpretation in py lib
- fix issue15: make nose compatibility tests compatible
with python3 (now that nose-1.0 supports python3)
- remove somewhat surprising "same-conftest" detection because
it ignored conftest.py files that appear in several subdirs.
- improve assertions ("not in"), thanks Floris Bruynooghe
- improve behaviour/warnings when running on top of "python -OO"
(assertions and docstrings are turned off, leading to potential
false positives)
- introduce a pytest_cmdline_preparse(config, args) hook
to allow dynamic computation of command line arguments.
This fixes a regression: py.test prior to 2.0 allowed
setting command line options from conftest.py files,
which pytest-2.0 had so far only allowed from ini-files.
- fix issue7: assert failures in doctest modules.
unexpected failures in doctests will now generally
show nicer, i.e. within the doctest failing context.
- fix issue9: setup/teardown functions for an xfail-marked
test will report as xfail if they fail but report as normally
passing (not xpassing) if they succeed. This is only true
for "direct" setup/teardown invocations because teardown_class/
teardown_module cannot closely relate to a single test.
- fix issue14: no logging errors at process exit
- refinements to "collecting" output on non-ttys
- refine internal plugin registration and --traceconfig output
- introduce a mechanism to prevent/unregister plugins from the
command line, see http://pytest.org/plugins.html#cmdunregister
- activate resultlog plugin by default
- fix regression wrt yielded tests which due to the
collection-before-running semantics were not
set up as with pytest 1.3.4. Note, however, that
the recommended and much cleaner way to do test
parametrization remains the "pytest_generate_tests"
mechanism, see the docs.

View File

@@ -23,21 +23,21 @@ assertion fails you will see the value of ``x``::
$ py.test test_assert1.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_assert1.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_assert1.py F
================================= FAILURES =================================
______________________________ test_function _______________________________
def test_function():
> assert f() == 4
E assert 3 == 4
E + where 3 = f()
test_assert1.py:5: AssertionError
========================= 1 failed in 0.03 seconds =========================
========================= 1 failed in 0.02 seconds =========================
Reporting details about the failing assertion is achieved by re-evaluating
the assert expression and recording intermediate values.
@@ -49,8 +49,8 @@ line::
assert f.read() != '...'
This might fail but when re-interpretation comes along it might pass.
You can rewrite this (and any other expression with side effects) easily, though:
This might fail but when re-interpretation comes along it might pass. You can
rewrite this (and any other expression with side effects) easily, though::
content = f.read()
assert content != '...'
@@ -105,14 +105,14 @@ if you run this module::
$ py.test test_assert2.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_assert2.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_assert2.py F
================================= FAILURES =================================
___________________________ test_set_comparison ____________________________
def test_set_comparison():
set1 = set("1308")
set2 = set("8035")
@@ -122,7 +122,7 @@ if you run this module::
E '1'
E Extra items in the right set:
E '5'
test_assert2.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================

View File

@@ -30,25 +30,25 @@ You can ask for available builtin or project-custom
captures writes to sys.stdout/sys.stderr and makes
them available successively via a ``capsys.readouterr()`` method
which returns a ``(out, err)`` tuple of captured snapshot strings.
capfd
captures writes to file descriptors 1 and 2 and makes
snapshotted ``(out, err)`` string tuples available
via the ``capfd.readouterr()`` method. If the underlying
platform does not have ``os.dup`` (e.g. Jython) tests using
this funcarg will automatically skip.
tmpdir
return a temporary directory path object
unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
monkeypatch
The returned ``monkeypatch`` funcarg provides these
helper methods to modify objects, dictionaries or os.environ::
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
@@ -56,15 +56,15 @@ You can ask for available builtin or project-custom
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
All modifications will be undone when the requesting
test function finished its execution. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
recwarn
Return a WarningsRecorder instance that provides these methods:
* ``pop(category=None)``: return last warning matching the category.
* ``clear()``: clear list of warnings
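
As a quick usage example for the ``capsys`` funcarg described above (test content invented)::

    def test_print_is_captured(capsys):
        print ("hello")
        out, err = capsys.readouterr()
        assert out == "hello\n"
        assert err == ""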

View File

@@ -12,6 +12,47 @@ attempts to read from it. This is important if some code paths in
a test otherwise might lead to waiting for input - which is usually
not desired when running automated tests.
.. _printdebugging:
Using print statements for debugging
---------------------------------------------------
One primary benefit of the default capturing of stdout/stderr output
is that you can use print statements for debugging::
# content of test_module.py
def setup_function(function):
print ("setting up %s" % function)
def test_func1():
assert True
def test_func2():
assert False
and running this module will show you precisely the output
of the failing function and hide the other one::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 2 items
test_module.py .F
================================= FAILURES =================================
________________________________ test_func2 ________________________________
def test_func2():
> assert False
E assert False
test_module.py:9: AssertionError
----------------------------- Captured stdout ------------------------------
setting up <function test_func2 at 0x2897d70>
==================== 1 failed, 1 passed in 0.02 seconds ====================
Setting capturing methods or disabling capturing
-------------------------------------------------
@@ -35,14 +76,6 @@ You can influence output capturing mechanisms from the command line::
py.test --capture=sys # replace sys.stdout/stderr with in-mem files
py.test --capture=fd # also point filedescriptors 1 and 2 to temp file
If you set capturing values in a conftest file like this::
# conftest.py
option_capture = 'fd'
then all tests in that directory will execute with "fd" style capturing.
_ `printdebugging`:
Accessing captured output from a test function
---------------------------------------------------

View File

@@ -49,7 +49,7 @@ copyright = u'2010, holger krekel et aliter'
# built documents.
#
# The short X.Y version.
version = '2.0.0'
version = '2.0'
# The full version, including alpha/beta/rc tags.
import py, pytest
assert py.path.local().relto(py.path.local(pytest.__file__).dirpath().dirpath())

View File

@@ -15,7 +15,7 @@ Contact channels
- #pylib on irc.freenode.net IRC channel for random questions.
- private mail to Holger.Krekel at gmail com if you have sensitive issues to communicate
- private mail to Holger.Krekel at gmail com if you want to communicate sensitive issues
- `commit mailing list`_

View File

@@ -15,11 +15,11 @@ python test modules)::
py.test --doctest-modules
You can make these changes permanent in your project by
putting them into a conftest.py file like this::
putting them into a pytest.ini file like this::
# content of conftest.py
option_doctestmodules = True
option_doctestglob = "*.rst"
# content of pytest.ini
[pytest]
addopts = --doctest-modules
If you then have a text file like this::
@@ -35,7 +35,7 @@ and another like this::
# content of mymodule.py
def something():
""" a doctest in a docstring
>>> something()
>>> something()
42
"""
return 42
@@ -44,7 +44,9 @@ then you can just invoke ``py.test`` without command line options::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-66
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
============================= in 0.00 seconds =============================
mymodule.py .
========================= 1 passed in 0.02 seconds =========================

View File

@@ -77,6 +77,22 @@ class TestSpecialisedExplanations(object):
def test_in_list(self):
assert 1 in [0, 2, 3, 4, 5]
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
assert 'foo' not in text
def test_not_in_text_single(self):
text = 'single foo line'
assert 'foo' not in text
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
assert 'foo' not in text
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
assert 'f'*70 not in text
def test_attribute():
class Foo(object):

View File

@@ -9,6 +9,6 @@ def test_failure_demo_fails_properly(testdir):
failure_demo.copy(testdir.tmpdir.join(failure_demo.basename))
result = testdir.runpytest(target)
result.stdout.fnmatch_lines([
"*35 failed*"
"*39 failed*"
])
assert result.ret != 0

View File

@@ -49,15 +49,15 @@ You can now run the test::
$ py.test test_sample.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_sample.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_sample.py F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
mysetup = <conftest.MySetup instance at 0x16f5998>
mysetup = <conftest.MySetup instance at 0x2526440>
def test_answer(mysetup):
app = mysetup.myapp()
@@ -122,12 +122,12 @@ Running it yields::
$ py.test test_ssh.py -rs
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_ssh.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_ssh.py s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-107/conftest.py:22: specify ssh host with --ssh
SKIP [1] /tmp/doc-exec-166/conftest.py:22: specify ssh host with --ssh
======================== 1 skipped in 0.02 seconds =========================

View File

@@ -27,8 +27,8 @@ now execute the test specification::
nonpython $ py.test test_simple.yml
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_simple.yml
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 2 items
test_simple.yml .F
@@ -37,8 +37,6 @@ now execute the test specification::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
========================= short test summary info ==========================
FAIL test_simple.yml::hello
==================== 1 failed, 1 passed in 0.06 seconds ====================
You get one dot for the passing ``sub1: sub1`` check and one failure.
@@ -58,8 +56,8 @@ reporting in ``verbose`` mode::
nonpython $ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30 -- /home/hpk/venv/0/bin/python
test path 1: /home/hpk/p/pytest/doc/example/nonpython
platform linux2 -- Python 2.6.6 -- pytest-2.0.1 -- /home/hpk/venv/0/bin/python
collecting ... collected 2 items
test_simple.yml:1: usecase: ok PASSED
test_simple.yml:1: usecase: hello FAILED
@@ -69,8 +67,6 @@ reporting in ``verbose`` mode::
usecase execution failed
spec failed: 'some': 'other'
no further details known at this point.
========================= short test summary info ==========================
FAIL test_simple.yml::hello
==================== 1 failed, 1 passed in 0.06 seconds ====================
While developing your custom test collection and execution it's also

View File

@@ -41,11 +41,12 @@ Running it means we are two tests for each test functions, using
the respective settings::
$ py.test -q
collecting ... collected 4 items
F..F
================================= FAILURES =================================
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize.TestClass instance at 0x128a638>, a = 1, b = 2
self = <test_parametrize.TestClass instance at 0x1521440>, a = 1, b = 2
def test_equals(self, a, b):
> assert a == b
@@ -54,7 +55,7 @@ the respective settings::
test_parametrize.py:17: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize.TestClass instance at 0x1296440>, a = 3, b = 2
self = <test_parametrize.TestClass instance at 0x158aa70>, a = 3, b = 2
def test_zerodivision(self, a, b):
> pytest.raises(ZeroDivisionError, "a/b")
@@ -97,11 +98,12 @@ for parametrizing test methods::
Running it gives similar results as before::
$ py.test -q test_parametrize2.py
collecting ... collected 4 items
F..F
================================= FAILURES =================================
_________________________ TestClass.test_equals[0] _________________________
self = <test_parametrize2.TestClass instance at 0x1dbcc68>, a = 1, b = 2
self = <test_parametrize2.TestClass instance at 0x22a77e8>, a = 1, b = 2
@params([dict(a=1, b=2), dict(a=3, b=3), ])
def test_equals(self, a, b):
@@ -111,7 +113,7 @@ Running it gives similar results as before::
test_parametrize2.py:19: AssertionError
______________________ TestClass.test_zerodivision[1] ______________________
self = <test_parametrize2.TestClass instance at 0x1dd0488>, a = 3, b = 2
self = <test_parametrize2.TestClass instance at 0x2332a70>, a = 3, b = 2
@params([dict(a=1, b=0), dict(a=3, b=2)])
def test_zerodivision(self, a, b):
@@ -138,5 +140,6 @@ with different sets of arguments for its three arguments::
Running it (with Python-2.4 through to Python2.7 installed)::
. $ py.test -q multipython.py
collecting ... collected 75 items
....s....s....s....ssssss....s....s....s....ssssss....s....s....s....ssssss
48 passed, 27 skipped in 2.55 seconds
48 passed, 27 skipped in 2.09 seconds

View File

@@ -13,11 +13,10 @@ get on the terminal - we are working on that):
assertion $ py.test failure_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev38
collecting ...
collected 35 items
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 39 items
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
failure_demo.py FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
================================= FAILURES =================================
____________________________ test_generative[0] ____________________________
@@ -31,7 +30,7 @@ get on the terminal - we are working on that):
failure_demo.py:15: AssertionError
_________________________ TestFailing.test_simple __________________________
self = <failure_demo.TestFailing object at 0x2c9da90>
self = <failure_demo.TestFailing object at 0x1b42950>
def test_simple(self):
def f():
@@ -41,21 +40,21 @@ get on the terminal - we are working on that):
> assert f() == g()
E assert 42 == 43
E + where 42 = <function f at 0x2c447d0>()
E + and 43 = <function g at 0x2c44cf8>()
E + where 42 = <function f at 0x1b33de8>()
E + and 43 = <function g at 0x1b47140>()
failure_demo.py:28: AssertionError
____________________ TestFailing.test_simple_multiline _____________________
self = <failure_demo.TestFailing object at 0x2c9dc90>
self = <failure_demo.TestFailing object at 0x1b42c50>
def test_simple_multiline(self):
otherfunc_multi(
42,
> 6*9)
failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 42, b = 54
@@ -67,19 +66,19 @@ get on the terminal - we are working on that):
failure_demo.py:12: AssertionError
___________________________ TestFailing.test_not ___________________________
self = <failure_demo.TestFailing object at 0x2c93f10>
self = <failure_demo.TestFailing object at 0x1b42190>
def test_not(self):
def f():
return 42
> assert not f()
E assert not 42
E + where 42 = <function f at 0x2ca1050>()
E + where 42 = <function f at 0x1b47320>()
failure_demo.py:38: AssertionError
_________________ TestSpecialisedExplanations.test_eq_text _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9d9d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b42150>
def test_eq_text(self):
> assert 'spam' == 'eggs'
@@ -90,7 +89,7 @@ get on the terminal - we are working on that):
failure_demo.py:42: AssertionError
_____________ TestSpecialisedExplanations.test_eq_similar_text _____________
self = <failure_demo.TestSpecialisedExplanations object at 0x2a04e90>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b48610>
def test_eq_similar_text(self):
> assert 'foo 1 bar' == 'foo 2 bar'
@@ -103,7 +102,7 @@ get on the terminal - we are working on that):
failure_demo.py:45: AssertionError
____________ TestSpecialisedExplanations.test_eq_multiline_text ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9d710>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b38f90>
def test_eq_multiline_text(self):
> assert 'foo\nspam\nbar' == 'foo\neggs\nbar'
@@ -116,13 +115,13 @@ get on the terminal - we are working on that):
failure_demo.py:48: AssertionError
______________ TestSpecialisedExplanations.test_eq_long_text _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9db10>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b42cd0>
def test_eq_long_text(self):
a = '1'*100 + 'a' + '2'*100
b = '1'*100 + 'b' + '2'*100
> assert a == b
E assert '111111111111...2222222222222' == '111111111111...2222222222222'
E assert '111111111111...2222222222222' == '1111111111111...2222222222222'
E Skipping 90 identical leading characters in diff
E Skipping 91 identical trailing characters in diff
E - 1111111111a222222222
@@ -133,13 +132,13 @@ get on the terminal - we are working on that):
failure_demo.py:53: AssertionError
_________ TestSpecialisedExplanations.test_eq_long_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x2caf950>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba6a90>
def test_eq_long_text_multiline(self):
a = '1\n'*100 + 'a' + '2\n'*100
b = '1\n'*100 + 'b' + '2\n'*100
> assert a == b
E assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n...n2\n2\n2\n2\n'
E assert '1\n1\n1\n1\n...n2\n2\n2\n2\n' == '1\n1\n1\n1\n1...n2\n2\n2\n2\n'
E Skipping 190 identical leading characters in diff
E Skipping 191 identical trailing characters in diff
E 1
@@ -157,7 +156,7 @@ get on the terminal - we are working on that):
failure_demo.py:58: AssertionError
_________________ TestSpecialisedExplanations.test_eq_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2caf590>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba6bd0>
def test_eq_list(self):
> assert [0, 1, 2] == [0, 1, 3]
@@ -167,7 +166,7 @@ get on the terminal - we are working on that):
failure_demo.py:61: AssertionError
______________ TestSpecialisedExplanations.test_eq_list_long _______________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9e310>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b42910>
def test_eq_list_long(self):
a = [0]*100 + [1] + [3]*100
@@ -179,7 +178,7 @@ get on the terminal - we are working on that):
failure_demo.py:66: AssertionError
_________________ TestSpecialisedExplanations.test_eq_dict _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2c9dc50>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba6f90>
def test_eq_dict(self):
> assert {'a': 0, 'b': 1} == {'a': 0, 'b': 2}
@@ -192,7 +191,7 @@ get on the terminal - we are working on that):
failure_demo.py:69: AssertionError
_________________ TestSpecialisedExplanations.test_eq_set __________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2cafc10>
self = <failure_demo.TestSpecialisedExplanations object at 0x1b485d0>
def test_eq_set(self):
> assert set([0, 10, 11, 12]) == set([0, 20, 21])
@@ -208,7 +207,7 @@ get on the terminal - we are working on that):
failure_demo.py:72: AssertionError
_____________ TestSpecialisedExplanations.test_eq_longer_list ______________
self = <failure_demo.TestSpecialisedExplanations object at 0x2cba890>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2850>
def test_eq_longer_list(self):
> assert [1,2] == [1,2,3]
@@ -218,13 +217,70 @@ get on the terminal - we are working on that):
failure_demo.py:75: AssertionError
_________________ TestSpecialisedExplanations.test_in_list _________________
self = <failure_demo.TestSpecialisedExplanations object at 0x2cba6d0>
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2f10>
def test_in_list(self):
> assert 1 in [0, 2, 3, 4, 5]
E assert 1 in [0, 2, 3, 4, 5]
failure_demo.py:78: AssertionError
__________ TestSpecialisedExplanations.test_not_in_text_multiline __________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2990>
def test_not_in_text_multiline(self):
text = 'some multiline\ntext\nwhich\nincludes foo\nand a\ntail'
> assert 'foo' not in text
E assert 'foo' not in 'some multiline\ntext\nw...ncludes foo\nand a\ntail'
E 'foo' is contained here:
E some multiline
E text
E which
E includes foo
E ? +++
E and a
E tail
failure_demo.py:82: AssertionError
___________ TestSpecialisedExplanations.test_not_in_text_single ____________
self = <failure_demo.TestSpecialisedExplanations object at 0x1b42110>
def test_not_in_text_single(self):
text = 'single foo line'
> assert 'foo' not in text
E assert 'foo' not in 'single foo line'
E 'foo' is contained here:
E single foo line
E ? +++
failure_demo.py:86: AssertionError
_________ TestSpecialisedExplanations.test_not_in_text_single_long _________
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba65d0>
def test_not_in_text_single_long(self):
text = 'head ' * 50 + 'foo ' + 'tail ' * 20
> assert 'foo' not in text
E assert 'foo' not in 'head head head head hea...ail tail tail tail tail '
E 'foo' is contained here:
E head head foo tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? +++
failure_demo.py:90: AssertionError
______ TestSpecialisedExplanations.test_not_in_text_single_long_term _______
self = <failure_demo.TestSpecialisedExplanations object at 0x1ba2c50>
def test_not_in_text_single_long_term(self):
text = 'head ' * 50 + 'f'*70 + 'tail ' * 20
> assert 'f'*70 not in text
E assert 'fffffffffff...ffffffffffff' not in 'head head he...l tail tail '
E 'ffffffffffffffffff...fffffffffffffffffff' is contained here:
E head head fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffftail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail tail
E ? ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
failure_demo.py:94: AssertionError
______________________________ test_attribute ______________________________
def test_attribute():
@@ -233,9 +289,9 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x2c9d750>.b
E + where 1 = <failure_demo.Foo object at 0x1ba2ad0>.b
failure_demo.py:85: AssertionError
failure_demo.py:101: AssertionError
_________________________ test_attribute_instance __________________________
def test_attribute_instance():
@@ -243,10 +299,10 @@ get on the terminal - we are working on that):
b = 1
> assert Foo().b == 2
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x2cafdd0>.b
E + where <failure_demo.Foo object at 0x2cafdd0> = <class 'failure_demo.Foo'>()
E + where 1 = <failure_demo.Foo object at 0x1ba2110>.b
E + where <failure_demo.Foo object at 0x1ba2110> = <class 'failure_demo.Foo'>()
failure_demo.py:91: AssertionError
failure_demo.py:107: AssertionError
__________________________ test_attribute_failure __________________________
def test_attribute_failure():
@@ -257,16 +313,16 @@ get on the terminal - we are working on that):
i = Foo()
> assert i.b == 2
failure_demo.py:100:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <failure_demo.Foo object at 0x2cba790>
self = <failure_demo.Foo object at 0x1ba2a90>
def _get_b(self):
> raise Exception('Failed to get attrib')
E Exception: Failed to get attrib
failure_demo.py:97: Exception
failure_demo.py:113: Exception
_________________________ test_attribute_multiple __________________________
def test_attribute_multiple():
@@ -276,57 +332,57 @@ get on the terminal - we are working on that):
b = 2
> assert Foo().b == Bar().b
E assert 1 == 2
E + where 1 = <failure_demo.Foo object at 0x2cba210>.b
E + where <failure_demo.Foo object at 0x2cba210> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x2cba850>.b
E + where <failure_demo.Bar object at 0x2cba850> = <class 'failure_demo.Bar'>()
E + where 1 = <failure_demo.Foo object at 0x1ba2950>.b
E + where <failure_demo.Foo object at 0x1ba2950> = <class 'failure_demo.Foo'>()
E + and 2 = <failure_demo.Bar object at 0x1ba2390>.b
E + where <failure_demo.Bar object at 0x1ba2390> = <class 'failure_demo.Bar'>()
failure_demo.py:108: AssertionError
failure_demo.py:124: AssertionError
__________________________ TestRaises.test_raises __________________________
self = <failure_demo.TestRaises instance at 0x2cc2560>
self = <failure_demo.TestRaises instance at 0x1bb3488>
def test_raises(self):
s = 'qwe'
> raises(TypeError, "int(s)")
failure_demo.py:117:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:133:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> int(s)
E ValueError: invalid literal for int() with base 10: 'qwe'
<0-codegen /home/hpk/p/pytest/_pytest/python.py:819>:1: ValueError
<0-codegen /home/hpk/p/pytest/_pytest/python.py:822>:1: ValueError
______________________ TestRaises.test_raises_doesnt _______________________
self = <failure_demo.TestRaises instance at 0x2cb6bd8>
self = <failure_demo.TestRaises instance at 0x1bb3098>
def test_raises_doesnt(self):
> raises(IOError, "int('3')")
E Failed: DID NOT RAISE
failure_demo.py:120: Failed
failure_demo.py:136: Failed
__________________________ TestRaises.test_raise ___________________________
self = <failure_demo.TestRaises instance at 0x2cc4830>
self = <failure_demo.TestRaises instance at 0x1ba7d40>
def test_raise(self):
> raise ValueError("demo error")
E ValueError: demo error
failure_demo.py:123: ValueError
failure_demo.py:139: ValueError
________________________ TestRaises.test_tupleerror ________________________
self = <failure_demo.TestRaises instance at 0x2cc5560>
self = <failure_demo.TestRaises instance at 0x1b5cc68>
def test_tupleerror(self):
> a,b = [1]
E ValueError: need more than 1 value to unpack
failure_demo.py:126: ValueError
failure_demo.py:142: ValueError
______ TestRaises.test_reinterpret_fails_with_print_for_the_fun_of_it ______
self = <failure_demo.TestRaises instance at 0x2cc6248>
self = <failure_demo.TestRaises instance at 0x1bb1488>
def test_reinterpret_fails_with_print_for_the_fun_of_it(self):
l = [1,2,3]
@@ -334,18 +390,18 @@ get on the terminal - we are working on that):
> a,b = l.pop()
E TypeError: 'int' object is not iterable
failure_demo.py:131: TypeError
failure_demo.py:147: TypeError
----------------------------- Captured stdout ------------------------------
l is [1, 2, 3]
________________________ TestRaises.test_some_error ________________________
self = <failure_demo.TestRaises instance at 0x2cc6f38>
self = <failure_demo.TestRaises instance at 0x1bb9128>
def test_some_error(self):
> if namenotexi:
E NameError: global name 'namenotexi' is not defined
failure_demo.py:134: NameError
failure_demo.py:150: NameError
____________________ test_dynamic_compile_shows_nicely _____________________
def test_dynamic_compile_shows_nicely():
@@ -357,17 +413,17 @@ get on the terminal - we are working on that):
py.std.sys.modules[name] = module
> module.foo()
failure_demo.py:149:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:165:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
def foo():
> assert 1 == 0
E assert 1 == 0
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:146>:2: AssertionError
<2-codegen 'abc-123' /home/hpk/p/pytest/doc/example/assertion/failure_demo.py:162>:2: AssertionError
____________________ TestMoreErrors.test_complex_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x2cc4050>
self = <failure_demo.TestMoreErrors instance at 0x1bb8f80>
def test_complex_error(self):
def f():
@@ -376,16 +432,16 @@ get on the terminal - we are working on that):
return 43
> somefunc(f(), g())
failure_demo.py:159:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:175:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
x = 44, y = 43
def somefunc(x,y):
> otherfunc(x,y)
failure_demo.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
failure_demo.py:8:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = 44, b = 43
@@ -396,39 +452,40 @@ get on the terminal - we are working on that):
failure_demo.py:5: AssertionError
___________________ TestMoreErrors.test_z1_unpack_error ____________________
self = <failure_demo.TestMoreErrors instance at 0x2cc7ab8>
self = <failure_demo.TestMoreErrors instance at 0x1bab200>
def test_z1_unpack_error(self):
l = []
> a,b = l
E ValueError: need more than 0 values to unpack
failure_demo.py:163: ValueError
failure_demo.py:179: ValueError
____________________ TestMoreErrors.test_z2_type_error _____________________
self = <failure_demo.TestMoreErrors instance at 0x2ccb8c0>
self = <failure_demo.TestMoreErrors instance at 0x1bb36c8>
def test_z2_type_error(self):
l = 3
> a,b = l
E TypeError: 'int' object is not iterable
failure_demo.py:167: TypeError
failure_demo.py:183: TypeError
______________________ TestMoreErrors.test_startswith ______________________
self = <failure_demo.TestMoreErrors instance at 0x2ccd5f0>
self = <failure_demo.TestMoreErrors instance at 0x1bbce60>
def test_startswith(self):
s = "123"
g = "456"
> assert s.startswith(g)
E assert <built-in method startswith of str object at 0x2c321b0>('456')
E + where <built-in method startswith of str object at 0x2c321b0> = '123'.startswith
E assert False
E + where False = <built-in method startswith of str object at 0x1ad6bd0>('456')
E + where <built-in method startswith of str object at 0x1ad6bd0> = '123'.startswith
failure_demo.py:172: AssertionError
failure_demo.py:188: AssertionError
__________________ TestMoreErrors.test_startswith_nested ___________________
self = <failure_demo.TestMoreErrors instance at 0x2ccbc20>
self = <failure_demo.TestMoreErrors instance at 0x1bbeb48>
def test_startswith_nested(self):
def f():
@@ -436,47 +493,49 @@ get on the terminal - we are working on that):
def g():
return "456"
> assert f().startswith(g())
E assert <built-in method startswith of str object at 0x2c321b0>('456')
E + where <built-in method startswith of str object at 0x2c321b0> = '123'.startswith
E + where '123' = <function f at 0x2c2d140>()
E + and '456' = <function g at 0x2cb00c8>()
E assert False
E + where False = <built-in method startswith of str object at 0x1ad6bd0>('456')
E + where <built-in method startswith of str object at 0x1ad6bd0> = '123'.startswith
E + where '123' = <function f at 0x1baade8>()
E + and '456' = <function g at 0x1baad70>()
failure_demo.py:179: AssertionError
failure_demo.py:195: AssertionError
_____________________ TestMoreErrors.test_global_func ______________________
self = <failure_demo.TestMoreErrors instance at 0x2cb69e0>
self = <failure_demo.TestMoreErrors instance at 0x1bbe098>
def test_global_func(self):
> assert isinstance(globf(42), float)
E assert isinstance(43, float)
E + where 43 = globf(42)
E assert False
E + where False = isinstance(43, float)
E + where 43 = globf(42)
failure_demo.py:182: AssertionError
failure_demo.py:198: AssertionError
_______________________ TestMoreErrors.test_instance _______________________
self = <failure_demo.TestMoreErrors instance at 0x2cc6440>
self = <failure_demo.TestMoreErrors instance at 0x1ba7bd8>
def test_instance(self):
self.x = 6*7
> assert self.x != 42
E assert 42 != 42
E + where 42 = 42
E + where 42 = <failure_demo.TestMoreErrors instance at 0x2cc6440>.x
E + where 42 = <failure_demo.TestMoreErrors instance at 0x1ba7bd8>.x
failure_demo.py:186: AssertionError
failure_demo.py:202: AssertionError
_______________________ TestMoreErrors.test_compare ________________________
self = <failure_demo.TestMoreErrors instance at 0x2dcc200>
self = <failure_demo.TestMoreErrors instance at 0x1bbca28>
def test_compare(self):
> assert globf(10) < 5
E assert 11 < 5
E + where 11 = globf(10)
failure_demo.py:189: AssertionError
failure_demo.py:205: AssertionError
_____________________ TestMoreErrors.test_try_finally ______________________
self = <failure_demo.TestMoreErrors instance at 0x2dce0e0>
self = <failure_demo.TestMoreErrors instance at 0x1bc0908>
def test_try_finally(self):
x = 1
@@ -484,5 +543,5 @@ get on the terminal - we are working on that):
> assert x == 0
E assert 1 == 0
failure_demo.py:194: AssertionError
======================== 35 failed in 0.19 seconds =========================
failure_demo.py:210: AssertionError
======================== 39 failed in 0.22 seconds =========================


@@ -7,6 +7,8 @@ basic patterns and examples
pass different values to a test function, depending on command line options
----------------------------------------------------------------------------
.. regendoc:wipe
Suppose we want to write a test that depends on a command line option.
Here is a basic pattern for achieving this::
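    # a minimal sketch (assumed contents; the diff elides the original file)
    # of a test using a "cmdopt" funcarg, matching the --cmdopt runs below
    # content of test_sample.py
    def test_answer(cmdopt):
        if cmdopt == "type1":
            print ("first")
        elif cmdopt == "type2":
            print ("second")
        assert 0 # to see what was printed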
@@ -32,7 +34,8 @@ provide the ``cmdopt`` through a :ref:`function argument <funcarg>` factory::
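    # a sketch of the option registration and funcarg factory (assumed
    # contents, following the pytest-2.0 funcarg naming convention)
    # content of conftest.py
    def pytest_addoption(parser):
        parser.addoption("--cmdopt", action="store", default="type1",
            help="my option: type1 or type2")
    def pytest_funcarg__cmdopt(request):
        return request.config.option.cmdopt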
Let's run this without supplying our new command line option::
$ py.test -q
$ py.test -q test_sample.py
collecting ... collected 1 items
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
@@ -55,6 +58,7 @@ Let's run this without supplying our new command line option::
And now with supplying a command line option::
$ py.test -q --cmdopt=type2
collecting ... collected 1 items
F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
@@ -83,6 +87,8 @@ on real-life examples.
generating parameters combinations, depending on command line
----------------------------------------------------------------------------
.. regendoc:wipe
Let's say we want to execute a test with different parameters
and the parameter range shall be determined by a command
line argument. Let's first write a simple computation test::
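    # a minimal sketch of the computation test (assumed contents,
    # consistent with the test_compute[4] failure shown further below)
    # content of test_compute.py
    def test_compute(param1):
        assert param1 < 4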
@@ -112,13 +118,15 @@ Now we add a test configuration like this::
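    # a sketch of such a configuration (assumed contents) using the
    # pytest_generate_tests hook and an --all option: 2 parameters by
    # default, 5 when --all is given
    # content of conftest.py
    def pytest_addoption(parser):
        parser.addoption("--all", action="store_true",
            help="run all parameter combinations")
    def pytest_generate_tests(metafunc):
        if 'param1' in metafunc.funcargnames:
            if metafunc.config.option.all:
                end = 5
            else:
                end = 2
            for i in range(end):
                metafunc.addcall(funcargs={'param1': i})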
This means that we only run 2 tests if we do not pass ``--all``::
$ py.test -q test_compute.py
collecting ... collected 2 items
..
2 passed in 0.01 seconds
We run only two computations, so we see two dots.
Let's run the full monty::
$ py.test -q --all test_compute.py
$ py.test -q --all
collecting ... collected 5 items
....F
================================= FAILURES =================================
_____________________________ test_compute[4] ______________________________
@@ -135,12 +143,45 @@ let's run the full monty::
As expected when running the full range of ``param1`` values
we'll get an error on the last one.
dynamically adding command line options
--------------------------------------------------------------
.. regendoc:wipe
Through :confval:`addopts` you can statically add command line
options for your project. You can also dynamically modify
the command line arguments before they get processed::
# content of conftest.py
import sys
def pytest_cmdline_preparse(args):
if 'xdist' in sys.modules: # pytest-xdist plugin
import multiprocessing
num = max(multiprocessing.cpu_count() / 2, 1)
args[:] = ["-n", str(num)] + args
If you have the :ref:`xdist plugin <xdist>` installed
you will now always perform test runs using a number
of subprocesses close to the number of your CPUs. Running in an empty
directory with the above conftest.py::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
gw0 I / gw1 I / gw2 I / gw3 I
gw0 [0] / gw1 [0] / gw2 [0] / gw3 [0]
scheduling tests via LoadScheduling
============================= in 0.29 seconds =============================
.. _`retrieved by hooks as item keywords`:
control skipping of tests according to command line option
--------------------------------------------------------------
.. regendoc:wipe
Here is a ``conftest.py`` file adding a ``--runslow`` command
line option to control skipping of ``slow`` marked tests::
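    # a sketch of such a conftest.py (assumed contents): register the
    # --runslow option and skip "slow"-marked tests unless it is given,
    # matching the skip reason shown in the run output below
    # content of conftest.py
    import pytest
    def pytest_addoption(parser):
        parser.addoption("--runslow", action="store_true",
            help="run slow tests")
    def pytest_runtest_setup(item):
        if 'slow' in item.keywords and not item.config.getvalue("runslow"):
            pytest.skip("need --runslow option to run")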
@@ -171,32 +212,33 @@ We can now write a test module like this::
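    # a sketch of such a test module (assumed contents): one plain test
    # and one test carrying the "slow" marker, yielding the ".s" output below
    # content of test_module.py
    import pytest
    def test_func_fast():
        pass
    @pytest.mark.slow
    def test_func_slow():
        pass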
and when running it you will see a skipped "slow" test::
$ py.test test_module.py -rs # "-rs" means report details on the little 's'
$ py.test -rs # "-rs" means report details on the little 's'
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_module.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 2 items
test_module.py .s
========================= short test summary info ==========================
SKIP [1] /tmp/doc-exec-104/conftest.py:9: need --runslow option to run
SKIP [1] /tmp/doc-exec-171/conftest.py:9: need --runslow option to run
=================== 1 passed, 1 skipped in 0.02 seconds ====================
Or run it including the ``slow`` marked test::
$ py.test test_module.py --runslow
$ py.test --runslow
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_module.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 2 items
test_module.py ..
========================= 2 passed in 0.01 seconds =========================
writing well integrated assertion helpers
--------------------------------------------------
.. regendoc:wipe
If you have a test helper function called from a test you can
use the ``pytest.fail`` marker to fail a test with a certain message.
The test support function will not show up in the traceback if you
@@ -218,7 +260,8 @@ of tracebacks: the ``checkconfig`` function will not be shown
unless the ``--fulltrace`` command line option is specified.
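A minimal sketch of such a helper, assuming a ``checkconfig``-style function
that sets ``__tracebackhide__`` and calls ``pytest.fail``::
    # content of test_checkconfig.py
    import pytest
    def checkconfig(x):
        __tracebackhide__ = True   # hide this helper from failure tracebacks
        if not hasattr(x, "config"):
            pytest.fail("not configured: %s" % (x,))
    def test_something():
        checkconfig(42)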
Let's run our little function::
$ py.test -q
$ py.test -q test_checkconfig.py
collecting ... collected 1 items
F
================================= FAILURES =================================
______________________________ test_something ______________________________
@@ -230,16 +273,17 @@ Let's run our little function::
test_checkconfig.py:8: Failed
1 failed in 0.02 seconds
Detect if running from within a py.test run
--------------------------------------------------------------
.. regendoc:wipe
Usually it is a bad idea to make application code
behave differently if called from a test. But if you
absolutely must find out if your application code is
running from a test you can do something like this::
# content of conftest.py in your testing directory
# content of conftest.py
def pytest_configure(config):
import sys
@@ -259,3 +303,57 @@ accordingly in your application. It's also a good idea
to use your own application module rather than ``sys``
for handling the flag.
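A sketch of the complete pattern (the hunk above shows only its first lines;
the ``_called_from_test`` flag name is an assumption for illustration)::
    # content of conftest.py
    def pytest_configure(config):
        import sys
        sys._called_from_test = True    # set a flag while tests are running
    def pytest_unconfigure(config):
        import sys
        del sys._called_from_test       # clean up after the test run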
Adding info to test report header
--------------------------------------------------------------
.. regendoc:wipe
It's easy to present extra information in a py.test run::
# content of conftest.py
def pytest_report_header(config):
return "project deps: mylib-1.1"
which will add the string to the test header accordingly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
project deps: mylib-1.1
collecting ... collected 0 items
============================= in 0.00 seconds =============================
.. regendoc:wipe
You can also return a list of strings which will be considered as several
lines of information. You can of course also make the amount of reporting
information depend on e.g. the value of ``config.option.verbose`` so that
you present more information appropriately::
# content of conftest.py
def pytest_report_header(config):
if config.option.verbose > 0:
return ["info1: did you know that ...", "did you?"]
which will add info only when run with "-v"::
$ py.test -v
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1 -- /home/hpk/venv/0/bin/python
info1: did you know that ...
did you?
collecting ... collected 0 items
============================= in 0.00 seconds =============================
and nothing when run plainly::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 0 items
============================= in 0.00 seconds =============================


@@ -14,7 +14,7 @@ Dependency injection through function arguments
py.test allows you to inject values into test functions through the *funcarg
mechanism*: For each argument name in a test function signature a factory is
looked up and called to create the value. The factory can live in the
same test class, test module, in a per-directory ``confest.py`` file or
same test class, test module, in a per-directory ``conftest.py`` file or
in an external plugin. It has full access to the requesting test
function, can register finalizers and invoke lifecycle-caching
helpers. As can be expected from a systematic dependency
@@ -45,8 +45,8 @@ Running the test looks like this::
$ py.test test_simplefactory.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_simplefactory.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_simplefactory.py F
@@ -150,8 +150,8 @@ Running this::
$ py.test test_example.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_example.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 10 items
test_example.py .........F
@@ -188,8 +188,8 @@ If you want to select only the run with the value ``7`` you could do::
$ py.test -v -k 7 test_example.py # or -k test_func[7]
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30 -- /home/hpk/venv/0/bin/python
test path 1: test_example.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1 -- /home/hpk/venv/0/bin/python
collecting ... collected 10 items
test_example.py:6: test_func[7] PASSED


@@ -16,7 +16,10 @@ Installation options::
To check your installation has installed the correct version::
$ py.test --version
This is py.test version 2.0.0.dev30, imported from /home/hpk/p/pytest/pytest.py
This is py.test version 2.0.1, imported from /home/hpk/p/pytest/pytest.py
setuptools registered plugins:
pytest-xdist-1.6.dev2 at /home/hpk/p/pytest-xdist/xdist/plugin.pyc
pytest-pep8-0.7 at /home/hpk/p/pytest-pep8/pytest_pep8.pyc
If you get an error checkout :ref:`installation issues`.
@@ -38,19 +41,19 @@ That's it. You can execute the test function now::
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-70
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_sample.py F
================================= FAILURES =================================
_______________________________ test_answer ________________________________
def test_answer():
> assert func(3) == 5
E assert 4 == 5
E + where 4 = func(3)
test_sample.py:5: AssertionError
========================= 1 failed in 0.02 seconds =========================
@@ -58,9 +61,10 @@ py.test found the ``test_answer`` function by following :ref:`standard test disc
.. note::
You can simply use the ``assert`` statement for coding expectations because
intermediate values will be presented to you. This is arguably easier than
learning all the `the JUnit legacy methods`_.
You can simply use the ``assert`` statement for asserting
expectations because intermediate values will be presented to you.
This is arguably easier than learning all `the JUnit legacy
methods`_.
However, there remains one caveat to using simple asserts: your
assertion expression should be side-effect free. Because
@@ -94,8 +98,9 @@ use the ``raises`` helper::
Running it, this time in "quiet" reporting mode::
$ py.test -q test_sysexit.py
collecting ... collected 1 items
.
1 passed in 0.00 seconds
1 passed in 0.01 seconds
.. todo:: For further ways to assert exceptions see the `raises`
@@ -121,17 +126,19 @@ There is no need to subclass anything. We can simply
run the module by passing its filename::
$ py.test -q test_class.py
collecting ... collected 2 items
.F
================================= FAILURES =================================
____________________________ TestClass.test_two ____________________________
self = <test_class.TestClass instance at 0x288fc20>
self = <test_class.TestClass instance at 0x178b2d8>
def test_two(self):
x = "hello"
> assert hasattr(x, 'check')
E assert hasattr('hello', 'check')
E assert False
E + where False = hasattr('hello', 'check')
test_class.py:8: AssertionError
1 failed, 1 passed in 0.02 seconds
@@ -157,21 +164,22 @@ py.test will lookup and call a factory to create the resource
before performing the test function call. Let's just run it::
$ py.test -q test_tmpdir.py
collecting ... collected 1 items
F
================================= FAILURES =================================
_____________________________ test_needsfiles ______________________________
tmpdir = local('/tmp/pytest-122/test_needsfiles0')
tmpdir = local('/tmp/pytest-101/test_needsfiles0')
def test_needsfiles(tmpdir):
print tmpdir
> assert 0
E assert 0
test_tmpdir.py:3: AssertionError
----------------------------- Captured stdout ------------------------------
/tmp/pytest-122/test_needsfiles0
1 failed in 0.05 seconds
/tmp/pytest-101/test_needsfiles0
1 failed in 0.03 seconds
Before the test runs, a unique-per-test-invocation temporary directory
was created. More info at :ref:`tmpdir handling`.


@@ -8,7 +8,7 @@ Welcome to ``py.test``!
- runs on Posix/Windows, Python 2.4-3.2, PyPy and Jython
- continuously `tested on many Python interpreters <http://hudson.testrun.org/view/pytest/job/pytest/>`_
- used in :ref:`many projects <projects>`, ranging from 10 to 10000 tests
- used in :ref:`many projects and organisations <projects>`, ranging from 10 to 10000 tests
- has :ref:`comprehensive documentation <toc>`
- comes with :ref:`tested examples <examples>`
- supports :ref:`good integration practises <goodpractises>`
@@ -18,7 +18,8 @@ Welcome to ``py.test``!
- makes it :ref:`easy to get started <getstarted>`, refined :ref:`usage options <usage>`
- :ref:`assert with the assert statement`
- helpful :ref:`traceback and failing assertion reporting <tbreportdemo>`
- allows `print debugging <printdebugging>`_ and `generic output capturing <captures>`_
- allows :ref:`print debugging <printdebugging>` and :ref:`generic output
capturing <captures>`
- supports :pep:`8` compliant coding style in tests
- **supports functional testing and complex test setups**


@@ -10,7 +10,6 @@
.. _pytest: http://pypi.python.org/pypi/pytest
.. _mercurial: http://mercurial.selenic.com/wiki/
.. _`setuptools`: http://pypi.python.org/pypi/setuptools
.. _`easy_install`:
.. _`distribute docs`:
.. _`distribute`: http://pypi.python.org/pypi/distribute


@@ -88,20 +88,20 @@ You can use the ``-k`` command line option to select tests::
$ py.test -k webtest # running with the above defined examples yields
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-74
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 4 items
test_mark.py ..
test_mark_classlevel.py ..
========================= 4 passed in 0.01 seconds =========================
========================= 4 passed in 0.02 seconds =========================
And you can also run all tests except the ones that match the keyword::
$ py.test -k-webtest
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-74
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 4 items
===================== 4 tests deselected by '-webtest' =====================
======================= 4 deselected in 0.01 seconds =======================
@@ -110,8 +110,8 @@ Or to only select the class::
$ py.test -kTestClass
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-74
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 4 items
test_mark_classlevel.py ..


@@ -39,8 +39,8 @@ will be undone.
.. background check:
$ py.test
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: /tmp/doc-exec-75
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 0 items
============================= in 0.00 seconds =============================


@@ -1,4 +1,4 @@
Writing, managing and understanding plugins
Working with plugins and conftest files
=============================================
.. _`local plugin`:
@@ -11,6 +11,7 @@ py.test implements all aspects of configuration, collection, running and reporti
.. _`pytest/plugin`: http://bitbucket.org/hpk42/pytest/src/tip/pytest/plugin/
.. _`conftest.py plugins`:
.. _`conftest.py`:
conftest.py: local per-directory plugins
--------------------------------------------------------------
@@ -53,6 +54,7 @@ earlier than further away ones.
conftest.py file.
.. _`external plugins`:
.. _`extplugins`:
Installing External Plugins / Searching
------------------------------------------------------
@@ -64,9 +66,26 @@ tool, for example::
pip uninstall pytest-NAME
If a plugin is installed, py.test automatically finds and integrates it,
there is no need to activate it. If you don't need a plugin anymore simply
de-install it. You can find a list of available plugins through a
`pytest- pypi.python.org search`_.
there is no need to activate it. Here is a list of known plugins:
* `pytest-capturelog <http://pypi.python.org/pypi/pytest-capturelog>`_:
to capture and assert about messages from the logging module
* `pytest-xdist <http://pypi.python.org/pypi/pytest-xdist>`_:
to distribute tests to CPUs and remote hosts, looponfailing mode,
see also :ref:`xdist`
* `pytest-cov <http://pypi.python.org/pypi/pytest-cov>`_:
coverage reporting, compatible with distributed testing
* `pytest-pep8 <http://pypi.python.org/pypi/pytest-pep8>`_:
a ``--pep8`` option to enable PEP8 compliance checking.
* `oejskit <http://pypi.python.org/pypi/oejskit>`_:
a plugin to run javascript unittests in live browsers
(**version 0.8.9 not compatible with pytest-2.0**)
You may discover more plugins through a `pytest- pypi.python.org search`_.
.. _`available installable plugins`:
.. _`pytest- pypi.python.org search`: http://pypi.python.org/pypi?%3Aaction=search&term=pytest-&submit=search
@@ -170,12 +189,42 @@ the plugin manager like this:
If you want to look at the names of existing plugins, use
the ``--traceconfig`` option.
.. _`findpluginname`:
Finding out which plugins are active
----------------------------------------------------------------------------
If you want to find out which plugins are active in your
environment you can type::
py.test --traceconfig
and you will get an extended test header which shows activated plugins
and their names. It will also print local plugins aka
:ref:`conftest.py <conftest>` files when they are loaded.
.. _`cmdunregister`:
deactivate / unregister a plugin by name
----------------------------------------------------------------------------
You can prevent plugins from loading or unregister them::
py.test -p no:NAME
This means that any subsequent try to activate/load the named
plugin will not work. See :ref:`findpluginname` for
how to obtain the name of a plugin.
.. _`builtin plugins`:
py.test default plugin reference
====================================
You can find the source code for the following plugins
in the `pytest repository <http://bitbucket.org/hpk42/pytest/>`_.
.. autosummary::
_pytest.assertion
@@ -195,7 +244,7 @@ py.test default plugin reference
_pytest.recwarn
_pytest.resultlog
_pytest.runner
_pytest.session
_pytest.main
_pytest.skipping
_pytest.terminal
_pytest.tmpdir
@@ -222,6 +271,7 @@ initialisation, command line and configuration hooks
.. currentmodule:: _pytest.hookspec
.. autofunction:: pytest_cmdline_preparse
.. autofunction:: pytest_cmdline_parse
.. autofunction:: pytest_namespace
.. autofunction:: pytest_addoption
@@ -288,14 +338,14 @@ Reference of important objects involved in hooks
.. autoclass:: _pytest.config.Parser
:members:
.. autoclass:: _pytest.session.Node(name, parent)
.. autoclass:: _pytest.main.Node(name, parent)
:members:
..
.. autoclass:: _pytest.session.File(fspath, parent)
.. autoclass:: _pytest.main.File(fspath, parent)
:members:
.. autoclass:: _pytest.session.Item(name, parent)
.. autoclass:: _pytest.main.Item(name, parent)
:members:
.. autoclass:: _pytest.python.Module(name, parent)
@@ -313,4 +363,3 @@ Reference of important objects involved in hooks
.. autoclass:: _pytest.runner.TestReport
:members:


@@ -3,21 +3,27 @@
Project examples
==========================
Here are some examples of projects using py.test:
Here are some examples of projects using py.test (please send notes via :ref:`contact`):
* `PyPy <http://pypy.org>`_, Python with a JIT compiler, running over `16000 tests <http://test.pypy.org>`_
* the `MoinMoin <http://moinmo.in>`_ Wiki Engine
* `tox <http://codespeak.net/tox>`_, virtualenv/Hudson integration tool
* `PIDA <http://pida.co.uk>`_ framework for integrated development
* `PyPM <http://code.activestate.com/pypm/>`_ ActiveState's package manager
* `Fom <http://packages.python.org/Fom/>`_ a fluid object mapper for FluidDB
* `applib <https://github.com/ActiveState/applib>`_ cross-platform utilities
* `six <http://pypi.python.org/pypi/six/>`_ Python 2 and 3 compatibility utilities
* `pediapress <http://code.pediapress.com/wiki/wiki>`_ MediaWiki articles
* `mwlib <http://pypi.python.org/pypi/mwlib>`_ mediawiki parser and utility library
* `The Translate Toolkit <http://translate.sourceforge.net/wiki/toolkit/index>`_ for localization and conversion
* `execnet <http://codespeak.net/execnet>`_ rapid multi-Python deployment
* `pylib <http://pylib.org>`_ cross-platform path, IO, dynamic code library
* `Pacha <http://pacha.cafepais.com/>`_ configuration management in five minutes
* `bbfreeze <http://pypi.python.org/pypi/bbfreeze>`_ create standalone executables from Python scripts
* `pdb++ <http://bitbucket.org/antocuni/pdb>`_ a fancier version of PDB
* `py-s3fuse <http://code.google.com/p/py-s3fuse/>`_ Amazon S3 FUSE based filesystem
* `waskr <http://pacha.cafepais.com/>`_ WSGI Stats Middleware
* `guachi <http://code.google.com/p/guachi/>`_ global persistent configs for Python modules
* `Circuits <http://pypi.python.org/pypi/circuits>`_ lightweight Event Driven Framework
* `pygtk-helpers <http://bitbucket.org/aafshar/pygtkhelpers-main/>`_ easy interaction with PyGTK
* `QuantumCore <http://quantumcore.org/>`_ statusmessage and repoze openid plugin
@@ -31,15 +37,16 @@ Here are some examples of projects using py.test:
* `bu <http://packages.python.org/bu/>`_ a microscopic build system
* `katcp <https://bitbucket.org/hodgestar/katcp>`_ Telescope communication protocol over Twisted
* `kss plugin timer <http://pypi.python.org/pypi/kss.plugin.timer>`_
* many more ... (please send notes via the :ref:`contact`)
Some organisations using py.test
-----------------------------------
* `Square Kilometre Array <http://ska.ac.za/>`_
* `Square Kilometre Array, Cape Town <http://ska.ac.za/>`_
* `Tandberg <http://www.tandberg.com/>`_
* `Stups department of Heinrich Heine University <http://www.stups.uni-duesseldorf.de/projects.php>`_
* `Open End <http://openend.se>`_
* `Laboraratory of Bioinformatics <http://genesilico.pl/>`_
* `merlinux <http://merlinux.eu>`_
* `Shootq <http://web.shootq.com/>`_
* `Stups department of Heinrich Heine University Düsseldorf <http://www.stups.uni-duesseldorf.de/projects.php>`_
* `cellzome <http://www.cellzome.com/>`_
* `Open End, Gothenburg <http://www.openend.se>`_
* `Laboratory of Bioinformatics, Warsaw <http://genesilico.pl/>`_
* `merlinux, Germany <http://merlinux.eu>`_
* many more ... (please be so kind to send a note via :ref:`contact`)


@@ -121,14 +121,14 @@ Running it with the report-on-xfail option gives this output::
example $ py.test -rx xfail_demo.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev31
test path 1: xfail_demo.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 5 items
xfail_demo.py xxxxx
========================= short test summary info ==========================
XFAIL xfail_demo.py::test_hello
XFAIL xfail_demo.py::test_hello2
reason: [NOTRUN]
reason: [NOTRUN]
XFAIL xfail_demo.py::test_hello3
condition: hasattr(os, 'sep')
XFAIL xfail_demo.py::test_hello4


@@ -28,15 +28,15 @@ Running this would result in a passed test except for the last
$ py.test test_tmpdir.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_tmpdir.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_tmpdir.py F
================================= FAILURES =================================
_____________________________ test_create_file _____________________________
tmpdir = local('/tmp/pytest-123/test_create_file0')
tmpdir = local('/tmp/pytest-102/test_create_file0')
def test_create_file(tmpdir):
p = tmpdir.mkdir("sub").join("hello.txt")
@@ -47,7 +47,7 @@ Running this would result in a passed test except for the last
E assert 0
test_tmpdir.py:7: AssertionError
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.03 seconds =========================
.. _`base temporary directory`:
@@ -64,8 +64,8 @@ You can override the default temporary directory setting like this::
py.test --basetemp=mydir
When distributing tests on the local machine, ``py.test`` takes care to
configure a basetemp directory for the sub processes such that all
temporary data lands below below a single per-test run basetemp directory.
configure a basetemp directory for the sub processes such that all temporary
data lands below a single per-test run basetemp directory.
.. _`py.path.local`: http://pylib.org/path.html


@@ -24,8 +24,8 @@ Running it yields::
$ py.test test_unittest.py
=========================== test session starts ============================
platform linux2 -- Python 2.6.5 -- pytest-2.0.0.dev30
test path 1: test_unittest.py
platform linux2 -- Python 2.6.6 -- pytest-2.0.1
collecting ... collected 1 items
test_unittest.py F
@@ -56,7 +56,7 @@ Running it yields::
/usr/lib/python2.6/unittest.py:350: AssertionError
----------------------------- Captured stdout ------------------------------
hello
========================= 1 failed in 0.02 seconds =========================
========================= 1 failed in 0.03 seconds =========================
.. _`unittest.py style`: http://docs.python.org/library/unittest.html


@@ -148,7 +148,7 @@ environment this command will send each tests to all
platforms - and report back failures from all platforms
at once. The specifications strings use the `xspec syntax`_.
.. _`xspec syntax`: http://codespeak.net/execnet/trunk/basics.html#xspec
.. _`xspec syntax`: http://codespeak.net/execnet/basics.html#xspec
.. _`socketserver.py`: http://bitbucket.org/hpk42/execnet/raw/2af991418160/execnet/script/socketserver.py


@@ -73,7 +73,7 @@ you can also use the following functions to implement fixtures::
function. Invoked for every test function in the module.
"""
def teardown_method(function):
def teardown_function(function):
""" teardown any state that was previously setup
with a setup_function call.
"""


@@ -1,7 +1,7 @@
"""
unit and functional testing with Python.
"""
__version__ = '2.0.0'
__version__ = '2.0.1'
__all__ = ['main']
from _pytest.core import main, UsageError, _preloadplugins


@@ -22,14 +22,14 @@ def main():
name='pytest',
description='py.test: simple powerful testing with Python',
long_description = long_description,
version='2.0.0',
version='2.0.1',
url='http://pytest.org',
license='MIT license',
platforms=['unix', 'linux', 'osx', 'cygwin', 'win32'],
author='holger krekel, Guido Wesdorp, Carl Friedrich Bolz, Armin Rigo, Maciej Fijalkowski & others',
author_email='holger at merlinux.eu',
entry_points= make_entry_points(),
install_requires=['py>=1.4.0a2'],
install_requires=['py>1.4.0'],
classifiers=['Development Status :: 5 - Production/Stable',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',


@@ -27,7 +27,7 @@ class TestGeneralUsage:
def test_option(pytestconfig):
assert pytestconfig.option.xyz == "123"
""")
result = testdir.runpytest("-p", "xyz", "--xyz=123")
result = testdir.runpytest("-p", "pytest_xyz", "--xyz=123")
assert result.ret == 0
result.stdout.fnmatch_lines([
'*1 passed*',
@@ -218,6 +218,12 @@ class TestGeneralUsage:
assert res.ret
res.stderr.fnmatch_lines(["*ERROR*not found*"])
def test_docstring_on_hookspec(self):
from _pytest import hookspec
for name, value in vars(hookspec).items():
if name.startswith("pytest_"):
assert value.__doc__, "no docstring for %s" % name
class TestInvocationVariants:
def test_earlyinit(self, testdir):
p = testdir.makepyfile("""
@@ -340,6 +346,12 @@ class TestInvocationVariants:
result.stdout.fnmatch_lines([
"*2 passed*"
])
path.join('test_hello.py').remove()
result = testdir.runpytest("--pyargs", "tpkg.test_hello")
assert result.ret != 0
result.stderr.fnmatch_lines([
"*file*not*found*test_hello*",
])
def test_cmdline_python_package_not_exists(self, testdir):
result = testdir.runpytest("--pyargs", "tpkgwhatv")


@@ -114,6 +114,10 @@ class TestAssert_reprcompare:
expl = callequal(A(), '')
assert not expl
def test_reprcompare_notin():
detail = plugin.pytest_assertrepr_compare('not in', 'foo', 'aaafoobbb')[1:]
assert detail == ["'foo' is contained here:", ' aaafoobbb', '? +++']
@needsnewassert
def test_pytest_assertrepr_compare_integration(testdir):
testdir.makepyfile("""
@@ -201,3 +205,14 @@ def test_traceback_failure(testdir):
"*test_traceback_failure.py:4: AssertionError"
])
@pytest.mark.skipif("sys.version_info < (2,5) or '__pypy__' in sys.builtin_module_names")
def test_warn_missing(testdir):
p1 = testdir.makepyfile("")
result = testdir.run(sys.executable, "-OO", "-m", "pytest", "-h")
result.stderr.fnmatch_lines([
"*WARNING*assertion*",
])
result = testdir.run(sys.executable, "-OO", "-m", "pytest", "--no-assert")
result.stderr.fnmatch_lines([
"*WARNING*assertion*",
])


@@ -306,6 +306,60 @@ class TestLoggingInteraction:
# verify proper termination
assert "closed" not in s
def test_logging_initialized_in_test(self, testdir):
p = testdir.makepyfile("""
import sys
def test_something():
# pytest does not import logging
assert 'logging' not in sys.modules
import logging
logging.basicConfig()
logging.warn("hello432")
assert 0
""")
result = testdir.runpytest(p, "--traceconfig",
"-p", "no:capturelog")
assert result.ret != 0
result.stdout.fnmatch_lines([
"*hello432*",
])
assert 'operation on closed file' not in result.stderr.str()
def test_conftestlogging_is_shown(self, testdir):
testdir.makeconftest("""
import logging
logging.basicConfig()
logging.warn("hello435")
""")
# make sure that logging is still captured in tests
result = testdir.runpytest("-s", "-p", "no:capturelog")
assert result.ret == 0
result.stderr.fnmatch_lines([
"WARNING*hello435*",
])
assert 'operation on closed file' not in result.stderr.str()
def test_conftestlogging_and_test_logging(self, testdir):
testdir.makeconftest("""
import logging
logging.basicConfig()
""")
# make sure that logging is still captured in tests
p = testdir.makepyfile("""
def test_hello():
import logging
logging.warn("hello433")
assert 0
""")
result = testdir.runpytest(p, "-p", "no:capturelog")
assert result.ret != 0
result.stdout.fnmatch_lines([
"WARNING*hello433*",
])
assert 'something' not in result.stderr.str()
assert 'operation on closed file' not in result.stderr.str()
class TestCaptureFuncarg:
def test_std_functional(self, testdir):
reprec = testdir.inline_runsource("""


@@ -1,6 +1,6 @@
import pytest, py
from _pytest.session import Session
from _pytest.main import Session
class TestCollector:
def test_collect_versus_item(self):
@@ -472,6 +472,21 @@ class TestSession:
item2b, = newcol.perform_collect([item.nodeid], genitems=False)
assert item2b == item2
def test_find_byid_without_instance_parents(self, testdir):
p = testdir.makepyfile("""
class TestClass:
def test_method(self):
pass
""")
arg = p.basename + ("::TestClass::test_method")
config = testdir.parseconfig(arg)
rcol = Session(config)
rcol.perform_collect()
items = rcol.items
assert len(items) == 1
item, = items
assert item.nodeid.endswith("TestClass::()::test_method")
class Test_getinitialnodes:
def test_global_file(self, testdir, tmpdir):
x = tmpdir.ensure("x.py")


@@ -82,7 +82,6 @@ class TestConfigCmdlineParsing:
class TestConfigAPI:
def test_config_trace(self, testdir):
config = testdir.Config()
l = []
@@ -112,8 +111,15 @@ class TestConfigAPI:
verbose = config.getvalueorskip("verbose")
assert verbose == config.option.verbose
config.option.hello = None
pytest.raises(pytest.skip.Exception,
"config.getvalueorskip('hello')")
try:
config.getvalueorskip('hello')
except KeyboardInterrupt:
raise
except:
excinfo = py.code.ExceptionInfo()
frame = excinfo.traceback[-2].frame
assert frame.code.name == "getvalueorskip"
assert frame.eval("__tracebackhide__")
def test_config_overwrite(self, testdir):
o = testdir.tmpdir
@@ -219,12 +225,14 @@ def test_options_on_small_file_do_not_blow_up(testdir):
['--traceconfig'], ['-v'], ['-v', '-v']):
runfiletest(opts + [path])
def test_preparse_ordering(testdir, monkeypatch):
def test_preparse_ordering_with_setuptools(testdir, monkeypatch):
pkg_resources = py.test.importorskip("pkg_resources")
def my_iter(name):
assert name == "pytest11"
class EntryPoint:
name = "mytestplugin"
class dist:
pass
def load(self):
class PseudoPlugin:
x = 42
@@ -239,3 +247,27 @@ def test_preparse_ordering(testdir, monkeypatch):
plugin = config.pluginmanager.getplugin("mytestplugin")
assert plugin.x == 42
def test_plugin_preparse_prevents_setuptools_loading(testdir, monkeypatch):
pkg_resources = py.test.importorskip("pkg_resources")
def my_iter(name):
assert name == "pytest11"
class EntryPoint:
name = "mytestplugin"
def load(self):
assert 0, "should not arrive here"
return iter([EntryPoint()])
monkeypatch.setattr(pkg_resources, 'iter_entry_points', my_iter)
config = testdir.parseconfig("-p", "no:mytestplugin")
plugin = config.pluginmanager.getplugin("mytestplugin")
assert plugin == -1
def test_cmdline_processargs_simple(testdir):
testdir.makeconftest("""
def pytest_cmdline_preparse(args):
args.append("-h")
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines([
"*pytest*",
"*-h*",
])


@@ -170,21 +170,6 @@ def test_setinitial_confcut(testdir):
assert conftest.getconftestmodules(sub) == []
assert conftest.getconftestmodules(conf.dirpath()) == []
def test_conftest_samecontent_detection(testdir):
conf = testdir.makeconftest("x=3")
p = testdir.mkdir("x")
conf.copy(p.join("conftest.py"))
conftest = Conftest()
conftest.setinitial([p])
l = conftest.getconftestmodules(p)
assert len(l) == 1
assert l[0].__file__ == p.join("conftest.py")
p2 = p.mkdir("y")
conf.copy(p2.join("conftest.py"))
l = conftest.getconftestmodules(p2)
assert len(l) == 1
assert l[0].__file__ == p.join("conftest.py")
@pytest.mark.multi(name='test tests whatever .dotdir'.split())
def test_setinitial_conftest_subdirs(testdir, name):
sub = testdir.mkdir(name)


@@ -1,5 +1,5 @@
import pytest, py, os
from _pytest.core import PluginManager, canonical_importname
from _pytest.core import PluginManager
from _pytest.core import MultiCall, HookRelay, varnames
@@ -15,30 +15,47 @@ class TestBootstrapping:
pluginmanager.consider_preparse(["xyz", "-p", "hello123"])
""")
def test_plugin_prevent_register(self):
pluginmanager = PluginManager()
pluginmanager.consider_preparse(["xyz", "-p", "no:abc"])
l1 = pluginmanager.getplugins()
pluginmanager.register(42, name="abc")
l2 = pluginmanager.getplugins()
assert len(l2) == len(l1)
def test_plugin_prevent_register_unregistered_alredy_registered(self):
pluginmanager = PluginManager()
pluginmanager.register(42, name="abc")
l1 = pluginmanager.getplugins()
assert 42 in l1
pluginmanager.consider_preparse(["xyz", "-p", "no:abc"])
l2 = pluginmanager.getplugins()
assert 42 not in l2
def test_plugin_skip(self, testdir, monkeypatch):
p = testdir.makepyfile(pytest_skipping1="""
p = testdir.makepyfile(skipping1="""
import pytest
pytest.skip("hello")
""")
p.copy(p.dirpath("pytest_skipping2.py"))
p.copy(p.dirpath("skipping2.py"))
monkeypatch.setenv("PYTEST_PLUGINS", "skipping2")
result = testdir.runpytest("-p", "skipping1", "--traceconfig")
assert result.ret == 0
result.stdout.fnmatch_lines([
"*hint*skipping2*hello*",
"*hint*skipping1*hello*",
"*hint*skipping2*hello*",
])
def test_consider_env_plugin_instantiation(self, testdir, monkeypatch):
pluginmanager = PluginManager()
testdir.syspathinsert()
testdir.makepyfile(pytest_xy123="#")
testdir.makepyfile(xy123="#")
monkeypatch.setitem(os.environ, 'PYTEST_PLUGINS', 'xy123')
l1 = len(pluginmanager.getplugins())
pluginmanager.consider_env()
l2 = len(pluginmanager.getplugins())
assert l2 == l1 + 1
assert pluginmanager.getplugin('pytest_xy123')
assert pluginmanager.getplugin('xy123')
pluginmanager.consider_env()
l3 = len(pluginmanager.getplugins())
assert l2 == l3
@@ -48,7 +65,8 @@ class TestBootstrapping:
def my_iter(name):
assert name == "pytest11"
class EntryPoint:
name = "mytestplugin"
name = "pytest_mytestplugin"
dist = None
def load(self):
class PseudoPlugin:
x = 42
@@ -60,8 +78,6 @@ class TestBootstrapping:
pluginmanager.consider_setuptools_entrypoints()
plugin = pluginmanager.getplugin("mytestplugin")
assert plugin.x == 42
plugin2 = pluginmanager.getplugin("pytest_mytestplugin")
assert plugin2 == plugin
def test_consider_setuptools_not_installed(self, monkeypatch):
monkeypatch.setitem(py.std.sys.modules, 'pkg_resources',
@@ -75,7 +91,7 @@ class TestBootstrapping:
p = testdir.makepyfile("""
import pytest
def test_hello(pytestconfig):
plugin = pytestconfig.pluginmanager.getplugin('x500')
plugin = pytestconfig.pluginmanager.getplugin('pytest_x500')
assert plugin is not None
""")
monkeypatch.setenv('PYTEST_PLUGINS', 'pytest_x500', prepend=",")
@@ -91,14 +107,14 @@ class TestBootstrapping:
reset = testdir.syspathinsert()
pluginname = "pytest_hello"
testdir.makepyfile(**{pluginname: ""})
pluginmanager.import_plugin("hello")
pluginmanager.import_plugin("pytest_hello")
len1 = len(pluginmanager.getplugins())
pluginmanager.import_plugin("pytest_hello")
len2 = len(pluginmanager.getplugins())
assert len1 == len2
plugin1 = pluginmanager.getplugin("pytest_hello")
assert plugin1.__name__.endswith('pytest_hello')
plugin2 = pluginmanager.getplugin("hello")
plugin2 = pluginmanager.getplugin("pytest_hello")
assert plugin2 is plugin1
def test_import_plugin_dotted_name(self, testdir):
@@ -116,13 +132,13 @@ class TestBootstrapping:
def test_consider_module(self, testdir):
pluginmanager = PluginManager()
testdir.syspathinsert()
testdir.makepyfile(pytest_plug1="#")
testdir.makepyfile(pytest_plug2="#")
testdir.makepyfile(pytest_p1="#")
testdir.makepyfile(pytest_p2="#")
mod = py.std.types.ModuleType("temp")
mod.pytest_plugins = ["pytest_plug1", "pytest_plug2"]
mod.pytest_plugins = ["pytest_p1", "pytest_p2"]
pluginmanager.consider_module(mod)
assert pluginmanager.getplugin("plug1").__name__ == "pytest_plug1"
assert pluginmanager.getplugin("plug2").__name__ == "pytest_plug2"
assert pluginmanager.getplugin("pytest_p1").__name__ == "pytest_p1"
assert pluginmanager.getplugin("pytest_p2").__name__ == "pytest_p2"
def test_consider_module_import_module(self, testdir):
mod = py.std.types.ModuleType("x")
@@ -198,8 +214,7 @@ class TestBootstrapping:
mod = py.std.types.ModuleType("pytest_xyz")
monkeypatch.setitem(py.std.sys.modules, 'pytest_xyz', mod)
pp = PluginManager()
pp.import_plugin('xyz')
assert pp.getplugin('xyz') == mod
pp.import_plugin('pytest_xyz')
assert pp.getplugin('pytest_xyz') == mod
assert pp.isregistered(mod)
@@ -217,9 +232,6 @@ class TestBootstrapping:
pass
excinfo = pytest.raises(Exception, "pp.register(hello())")
def test_canonical_importname(self):
for name in 'xyz', 'pytest_xyz', 'pytest_Xyz', 'Xyz':
impname = canonical_importname(name)
def test_notify_exception(self, capfd):
pp = PluginManager()
@@ -308,25 +320,6 @@ class TestPytestPluginInteractions:
res = config.hook.pytest_myhook(xyz=10)
assert res == [11]
def test_addhooks_docstring_error(self, testdir):
newhooks = testdir.makepyfile(newhooks="""
class A: # no pytest_ prefix
pass
def pytest_myhook(xyz):
pass
""")
conf = testdir.makeconftest("""
import sys ; sys.path.insert(0, '.')
import newhooks
def pytest_addhooks(pluginmanager):
pluginmanager.addhooks(newhooks)
""")
res = testdir.runpytest()
assert res.ret != 0
res.stderr.fnmatch_lines([
"*docstring*pytest_myhook*newhooks*"
])
def test_addhooks_nohooks(self, testdir):
conf = testdir.makeconftest("""
import sys
@@ -400,7 +393,7 @@ class TestPytestPluginInteractions:
assert len(l) == 0
config.pluginmanager.do_configure(config=config)
assert len(l) == 1
config.pluginmanager.register(A()) # this should lead to a configured() plugin
config.pluginmanager.register(A()) # leads to a configured() plugin
assert len(l) == 2
assert l[0] != l[1]


@@ -48,19 +48,16 @@ class TestDoctests:
def test_doctest_unexpected_exception(self, testdir):
p = testdir.maketxtfile("""
>>> i = 0
>>> i = 1
>>> x
>>> 0 / i
2
""")
reprec = testdir.inline_run(p)
call = reprec.getcall("pytest_runtest_logreport")
assert call.report.failed
assert call.report.longrepr
# XXX
#testitem, = items
#excinfo = pytest.raises(Failed, "testitem.runtest()")
#repr = testitem.repr_failure(excinfo, ("", ""))
#assert repr.reprlocation
result = testdir.runpytest("--doctest-modules")
result.stdout.fnmatch_lines([
"*unexpected_exception*",
"*>>> i = 0*",
"*>>> 0 / i*",
"*UNEXPECTED*ZeroDivision*",
])
def test_doctestmodule(self, testdir):
p = testdir.makepyfile("""


@@ -1,13 +1,18 @@
import py, pytest,os
from _pytest.helpconfig import collectattr
def test_version(testdir):
def test_version(testdir, pytestconfig):
result = testdir.runpytest("--version")
assert result.ret == 0
#p = py.path.local(py.__file__).dirpath()
result.stderr.fnmatch_lines([
'*py.test*%s*imported from*' % (pytest.__version__, )
])
if pytestconfig.pluginmanager._plugin_distinfo:
result.stderr.fnmatch_lines([
"*setuptools registered plugins:",
"*at*",
])
def test_help(testdir):
result = testdir.runpytest("--help")
@@ -51,3 +56,9 @@ def test_hookvalidation_optional(testdir):
result = testdir.runpytest()
assert result.ret == 0
def test_traceconfig(testdir):
result = testdir.runpytest("--traceconfig")
result.stdout.fnmatch_lines([
"*using*pytest*py*",
"*active plugins*",
])


@@ -6,7 +6,9 @@ def setup_module(mod):
def test_nose_setup(testdir):
p = testdir.makepyfile("""
l = []
from nose.tools import with_setup
@with_setup(lambda: l.append(1), lambda: l.append(2))
def test_hello():
assert l == [1]
@@ -24,6 +26,8 @@ def test_nose_setup(testdir):
def test_nose_setup_func(testdir):
p = testdir.makepyfile("""
from nose.tools import with_setup
l = []
def my_setup():
@@ -34,16 +38,15 @@ def test_nose_setup_func(testdir):
b = 2
l.append(b)
@with_setup(my_setup, my_teardown)
def test_hello():
print l
print (l)
assert l == [1]
def test_world():
print l
print (l)
assert l == [1,2]
test_hello.setup = my_setup
test_hello.teardown = my_teardown
""")
result = testdir.runpytest(p, '-p', 'nose')
result.stdout.fnmatch_lines([
@@ -53,25 +56,25 @@ def test_nose_setup_func(testdir):
def test_nose_setup_func_failure(testdir):
p = testdir.makepyfile("""
l = []
from nose.tools import with_setup
l = []
my_setup = lambda x: 1
my_teardown = lambda x: 2
@with_setup(my_setup, my_teardown)
def test_hello():
print l
print (l)
assert l == [1]
def test_world():
print l
print (l)
assert l == [1,2]
test_hello.setup = my_setup
test_hello.teardown = my_teardown
""")
result = testdir.runpytest(p, '-p', 'nose')
result.stdout.fnmatch_lines([
"*TypeError: <lambda>() takes exactly 1 argument (0 given)*"
"*TypeError: <lambda>() takes exactly 1*0 given*"
])
@@ -83,11 +86,11 @@ def test_nose_setup_func_failure_2(testdir):
my_teardown = 2
def test_hello():
print l
print (l)
assert l == [1]
def test_world():
print l
print (l)
assert l == [1,2]
test_hello.setup = my_setup
@@ -118,11 +121,11 @@ def test_nose_setup_partial(testdir):
my_teardown_partial = partial(my_teardown, 2)
def test_hello():
print l
print (l)
assert l == [1]
def test_world():
print l
print (l)
assert l == [1,2]
test_hello.setup = my_setup_partial
@@ -173,21 +176,21 @@ def test_nose_test_generator_fixtures(testdir):
class TestClass(object):
def setup(self):
print "setup called in", self
print ("setup called in %s" % self)
self.called = ['setup']
def teardown(self):
print "teardown called in", self
print ("teardown called in %s" % self)
eq_(self.called, ['setup'])
self.called.append('teardown')
def test(self):
print "test called in", self
print ("test called in %s" % self)
for i in range(0, 5):
yield self.check, i
def check(self, i):
print "check called in", self
print ("check called in %s" % self)
expect = ['setup']
#for x in range(0, i):
# expect.append('setup')


@@ -196,6 +196,49 @@ class TestGenerator:
assert passed == 4
assert not skipped and not failed
def test_setupstate_is_preserved_134(self, testdir):
# yield-based tests are messy with respect to setup state because
# they already invoke setup functions during collection
# and then again when they are run. For now, we want to make sure
# that the old 1.3.4 behaviour is preserved: all yielded
# functions share the same "self" instance that was
# used during collection.
o = testdir.makepyfile("""
setuplist = []
class TestClass:
def setup_method(self, func):
#print "setup_method", self, func
setuplist.append(self)
self.init = 42
def teardown_method(self, func):
self.init = None
def test_func1(self):
pass
def test_func2(self):
yield self.func2
yield self.func2
def func2(self):
assert self.init
def test_setuplist():
# once for test_func2 during collection
# once for test_func1 during test run
# once for test_func2 during test run
#print setuplist
assert len(setuplist) == 3, len(setuplist)
assert setuplist[0] == setuplist[2], setuplist
assert setuplist[1] != setuplist[2], setuplist
""")
reprec = testdir.inline_run(o, '-v')
passed, skipped, failed = reprec.countoutcomes()
assert passed == 4
assert not skipped and not failed
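For orientation: a test function that yields callables is expanded into one collected item per yield, which is how the run above arrives at 4 passed (test_func1, the two yielded func2 calls, and test_setuplist). A minimal, self-contained sketch of the same mechanism, independent of the setup-state detail being pinned down here:

    # minimal generator ("yield") test: each yielded callable/argument
    # pair is collected and run as a separate test item
    def check_positive(n):
        assert n > 0

    def test_generated():
        for n in (1, 2, 3):
            yield check_positive, n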
class TestFunction:
def test_getmodulecollector(self, testdir):
item = testdir.getitem("def test_func(): pass")
@@ -1237,3 +1280,49 @@ def test_customized_python_discovery(testdir):
result.stdout.fnmatch_lines([
"*2 passed*",
])
def test_collector_attributes(testdir):
testdir.makeconftest("""
import pytest
def pytest_pycollect_makeitem(collector):
assert collector.Function == pytest.Function
assert collector.Class == pytest.Class
assert collector.Instance == pytest.Instance
assert collector.Module == pytest.Module
""")
testdir.makepyfile("""
def test_hello():
pass
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines([
"*1 passed*",
])
def test_customize_through_attributes(testdir):
testdir.makeconftest("""
import pytest
class MyFunction(pytest.Function):
pass
class MyInstance(pytest.Instance):
Function = MyFunction
class MyClass(pytest.Class):
Instance = MyInstance
def pytest_pycollect_makeitem(collector, name, obj):
if name.startswith("MyTestClass"):
return MyClass(name, parent=collector)
""")
testdir.makepyfile("""
class MyTestClass:
def test_hello(self):
pass
""")
result = testdir.runpytest("--collectonly")
result.stdout.fnmatch_lines([
"*MyClass*",
"*MyInstance*",
"*MyFunction*test_hello*",
])


@@ -2,10 +2,10 @@ import py, pytest
import os
from _pytest.resultlog import generic_path, ResultLog, \
pytest_configure, pytest_unconfigure
from _pytest.session import Node, Item, FSCollector
from _pytest.main import Node, Item, FSCollector
def test_generic_path(testdir):
from _pytest.session import Session
from _pytest.main import Session
config = testdir.parseconfig()
session = Session(config)
p1 = Node('a', config=config, session=session)


@@ -212,8 +212,8 @@ def test_plugin_specify(testdir):
#)
def test_plugin_already_exists(testdir):
config = testdir.parseconfig("-p", "session")
assert config.option.plugins == ['session']
config = testdir.parseconfig("-p", "terminal")
assert config.option.plugins == ['terminal']
config.pluginmanager.do_configure(config)
def test_exclude(testdir):


@@ -308,6 +308,37 @@ class TestXFail:
"*1 xfailed*",
])
class TestXFailwithSetupTeardown:
def test_failing_setup_issue9(self, testdir):
testdir.makepyfile("""
import pytest
def setup_function(func):
assert 0
@pytest.mark.xfail
def test_func():
pass
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines([
"*1 xfail*",
])
def test_failing_teardown_issue9(self, testdir):
testdir.makepyfile("""
import pytest
def teardown_function(func):
assert 0
@pytest.mark.xfail
def test_func():
pass
""")
result = testdir.runpytest()
result.stdout.fnmatch_lines([
"*1 xfail*",
])
class TestSkipif:
def test_skipif_conditional(self, testdir):

tox.ini

@@ -2,8 +2,9 @@
distshare={homedir}/.tox/distshare
envlist=py26,py27,py31,py32,py27-xdist,py25,py24
indexserver=
default = http://pypi.testrun.org
pypi = http://pypi.python.org/simple
testrun = http://pypi.testrun.org
default = http://pypi.testrun.org
[testenv]
changedir=testing
@@ -15,7 +16,7 @@ deps=
[testenv:genscript]
changedir=.
commands= py.test --genscript=pytest1
deps=py>=1.4.0a2
deps=py>=1.4.0
[testenv:py27-xdist]
changedir=.
@@ -23,7 +24,7 @@ basepython=python2.7
deps=pytest-xdist
commands=
py.test -n3 -rfsxX \
--junitxml={envlogdir}/junit-{envname}.xml []
--ignore .tox --junitxml={envlogdir}/junit-{envname}.xml []
[testenv:trial]
changedir=.
@@ -48,7 +49,8 @@ commands=
make html
[testenv:py31]
deps=py>=1.4.0a2
deps=py>1.4.0
:pypi:nose>=1.0
[testenv:py31-xdist]
deps=pytest-xdist
@@ -57,22 +59,20 @@ commands=
--junitxml={envlogdir}/junit-{envname}.xml []
[testenv:py32]
deps=py>=1.4.0a2
[testenv:pypy]
basepython=pypy-c
deps=py>=1.4.0
[testenv:jython]
changedir=testing
commands=
{envpython} {envbindir}/py.test-jython --no-tools-on-path \
-rfsxX --junitxml={envlogdir}/junit-{envname}2.xml [acceptance_test.py plugin]
-rfsxX --junitxml={envlogdir}/junit-{envname}2.xml []
[pytest]
minversion=2.0
plugins=pytester
addopts= -rxf --pyargs --doctest-modules
#addopts= -rxf --pyargs --doctest-modules --ignore=.tox
rsyncdirs=tox.ini pytest.py _pytest testing
python_files=test_*.py *_test.py
python_classes=Test Acceptance
python_functions=test
pep8ignore = E401
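For background on the indexserver block changed above: tox resolves unprefixed deps against the "default" index, while a ":name:" prefix on a dep selects one of the other named indexes, which is why nose is spelled ":pypi:nose>=1.0" in the py31 environment. A minimal, hypothetical fragment showing the pattern:

    # hypothetical tox.ini fragment: unprefixed deps come from the
    # default index, ":pypi:"-prefixed deps from pypi.python.org
    [tox]
    indexserver =
        default = http://pypi.testrun.org
        pypi = http://pypi.python.org/simple

    [testenv]
    deps =
        py>=1.4.0
        :pypi:nose>=1.0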