Merge remote-tracking branch 'upstream/features' into ApaDoctor/disable-repeated-fixture

Bruno Oliveira 2018-04-23 22:24:53 -03:00
commit 132fb61eba
222 changed files with 16500 additions and 7521 deletions

View File

@ -1,7 +1,4 @@
[run]
omit =
omit =
# standalonetemplate is read dynamically and tested by test_genscript
*standalonetemplate.py
# oldinterpret could be removed, as it is no longer used in py26+
*oldinterpret.py
vendored_packages

View File

@ -1,15 +1,14 @@
Thanks for submitting a PR, your contribution is really appreciated!
Here's a quick checklist that should be present in PRs:
Here's a quick checklist that should be present in PRs (you can delete this text from the final description, this is
just a guideline):
- [ ] Add a new news fragment into the changelog folder
* name it `$issue_id.$type` for example (588.bug)
* if you don't have an issue_id change it to the pr id after creating the pr
* ensure type is one of `removal`, `feature`, `bugfix`, `vendor`, `doc` or `trivial`
* Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
- [ ] Target: for `bugfix`, `vendor`, `doc` or `trivial` fixes, target `master`; for removals or features target `features`;
- [ ] Make sure to include reasonable tests for your change if necessary
- [ ] Create a new changelog file in the `changelog` folder, with a name like `<ISSUE NUMBER>.<TYPE>.rst`. See [changelog/README.rst](/changelog/README.rst) for details.
- [ ] Target the `master` branch for bug fixes, documentation updates and trivial changes.
- [ ] Target the `features` branch for new features and removals/deprecations.
- [ ] Include documentation when adding new features.
- [ ] Include new tests or update existing tests when applicable.
Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please:
Unless your change is trivial or a small documentation fix (e.g., a typo or reword of a small section) please:
- [ ] Add yourself to `AUTHORS`;
- [ ] Add yourself to `AUTHORS` in alphabetical order;

1
.gitignore vendored
View File

@ -33,6 +33,7 @@ env/
3rdparty/
.tox
.cache
.pytest_cache
.coverage
.ropeproject
.idea

View File

@ -1,42 +1,58 @@
sudo: false
language: python
python:
- '3.5'
# command to install dependencies
install: "pip install -U tox"
# # command to run tests
- '3.6'
install:
- pip install --upgrade --pre tox
env:
matrix:
# coveralls is not listed in tox's envlist, but should run in travis
- TOXENV=coveralls
# note: please use "tox --listenvs" to populate the build matrix below
- TOXENV=linting
- TOXENV=py26
- TOXENV=py27
- TOXENV=py33
- TOXENV=py34
- TOXENV=py35
- TOXENV=pypy
- TOXENV=py36
- TOXENV=py27-pexpect
- TOXENV=py27-xdist
- TOXENV=py27-trial
- TOXENV=py35-pexpect
- TOXENV=py35-xdist
- TOXENV=py35-trial
- TOXENV=py27-numpy
- TOXENV=py27-pluggymaster
- TOXENV=py36-pexpect
- TOXENV=py36-xdist
- TOXENV=py36-trial
- TOXENV=py36-numpy
- TOXENV=py36-pluggymaster
- TOXENV=py27-nobyte
- TOXENV=doctesting
- TOXENV=freeze
- TOXENV=docs
matrix:
jobs:
include:
- env: TOXENV=py36
- env: TOXENV=pypy
python: 'pypy-5.4'
- env: TOXENV=py35
python: '3.5'
- env: TOXENV=py35-freeze
python: '3.5'
- env: TOXENV=py37
python: 'nightly'
- stage: deploy
python: '3.6'
- env: TOXENV=py37
python: 'nightly'
allow_failures:
- env: TOXENV=py37
python: 'nightly'
env:
install: pip install -U setuptools setuptools_scm
script: skip
deploy:
provider: pypi
user: nicoddemus
distributions: sdist bdist_wheel
skip_upload_docs: true
password:
secure: xanTgTUu6XDQVqB/0bwJQXoDMnU5tkwZc5koz6mBkkqZhKdNOi2CLoC1XhiSZ+ah24l4V1E0GAqY5kBBcy9d7NVe4WNg4tD095LsHw+CRU6/HCVIFfyk2IZ+FPAlguesCcUiJSXOrlBF+Wj68wEvLoK7EoRFbJeiZ/f91Ww1sbtDlqXABWGHrmhPJL5Wva7o7+wG7JwJowqdZg1pbQExsCc7b53w4v2RBu3D6TJaTAzHiVsW+nUSI67vKI/uf+cR/OixsTfy37wlHgSwihYmrYLFls3V0bSpahCim3bCgMaFZx8S8xrdgJ++PzBCof2HeflFKvW+VCkoYzGEG4NrTWJoNz6ni4red9GdvfjGH3YCjAKS56h9x58zp2E5rpsb/kVq5/45xzV+dq6JRuhQ1nJWjBC6fSKAc/bfwnuFK3EBxNLkvBssLHvsNjj5XG++cB8DdS9wVGUqjpoK4puaXUWFqy4q3S9F86HEsKNgExtieA9qNx+pCIZVs6JCXZNjr0I5eVNzqJIyggNgJG6RyravsU35t9Zd9doL5g4Y7UKmAGTn1Sz24HQ4sMQgXdm2SyD8gEK5je4tlhUvfGtDvMSlstq71kIn9nRpFnqB6MFlbYSEAZmo8dGbCquoUc++6Rum208wcVbrzzVtGlXB/Ow9AbFMYeAGA0+N/K1e59c=
on:
tags: true
repo: pytest-dev/pytest
script: tox --recreate

43
AUTHORS
View File

@ -3,19 +3,25 @@ merlinux GmbH, Germany, office at merlinux eu
Contributors include::
Aaron Coleman
Abdeali JK
Abhijeet Kasurde
Ahn Ki-Wook
Alan Velasco
Alexander Johnson
Alexei Kozlenok
Anatoly Bubenkoff
Anders Hovmöller
Andras Tim
Andreas Zeidler
Andrzej Ostrowski
Andy Freeland
Anthon van der Neut
Anthony Shaw
Anthony Sottile
Antony Lee
Armin Rigo
Aron Coyle
Aron Curzon
Aviv Palivoda
Barney Gale
@ -24,11 +30,14 @@ Benjamin Peterson
Bernard Pratz
Bob Ippolito
Brian Dorsey
Brian Maissy
Brian Okken
Brianna Laugher
Bruno Oliveira
Cal Leeming
Carl Friedrich Bolz
Carlos Jenkins
Ceridwen
Charles Cloud
Charnjit SiNGH (CCSJ)
Chris Lamb
@ -36,6 +45,7 @@ Christian Boelsen
Christian Theunert
Christian Tismer
Christopher Gilling
Cyrus Maden
Daniel Grana
Daniel Hahler
Daniel Nuri
@ -45,6 +55,7 @@ Dave Hunt
David Díaz-Barquero
David Mohr
David Vierra
Daw-Ran Liou
Denis Kirisov
Diego Russo
Dmitry Dygalo
@ -63,6 +74,7 @@ Feng Ma
Florian Bruhin
Floris Bruynooghe
Gabriel Reis
George Kussumoto
Georgy Dyuldin
Graham Horler
Greg Price
@ -70,33 +82,45 @@ Grig Gheorghiu
Grigorii Eremeev (budulianin)
Guido Wesdorp
Harald Armin Massa
Henk-Jaap Wagenaar
Hugo van Kemenade
Hui Wang (coldnight)
Ian Bicking
Ian Lesperance
Jaap Broekhuizen
Jan Balster
Janne Vanhala
Jason R. Coombs
Javier Domingo Cansino
Javier Romero
Jeff Rackauckas
Jeff Widman
John Eddie Ayson
John Towler
Jon Sonesen
Jonas Obrist
Jordan Guymon
Jordan Moldow
Jordan Speicher
Joshua Bronson
Jurko Gospodnetić
Justyna Janczyszyn
Kale Kundert
Katarzyna Jachim
Katerina Koukiou
Kevin Cox
Kodi B. Arfer
Kostis Anagnostopoulos
Lawrence Mitchell
Lee Kamentsky
Lev Maximov
Llandy Riveron Del Risco
Loic Esteve
Lukas Bednar
Luke Murphy
Maciek Fijalkowski
Maho
Maik Figura
Mandeep Bhutani
Manuel Krebber
Marc Schlaich
@ -104,6 +128,7 @@ Marcin Bachry
Mark Abramowitz
Markus Unterwaditzer
Martijn Faassen
Martin Altmayer
Martin K. Scherer
Martin Prusse
Mathieu Clabaut
@ -111,28 +136,34 @@ Matt Bachmann
Matt Duck
Matt Williams
Matthias Hafner
Maxim Filipenko
mbyt
Michael Aquilina
Michael Birtwell
Michael Droettboom
Michael Seifert
Michal Wajszczuk
Mihai Capotă
Mike Lundy
Nathaniel Waisbrot
Ned Batchelder
Neven Mundar
Nicolas Delaby
Oleg Pidsadnyi
Oleg Sushchenko
Oliver Bestwalter
Omar Kohl
Omer Hadari
Patrick Hayes
Paweł Adamczak
Pedro Algarvio
Pieter Mulder
Piotr Banaszkiewicz
Punyashloka Biswal
Quentin Pradet
Ralf Schmitt
Ran Benita
Raphael Castaneda
Raphael Pierzina
Raquel Alegre
Ravi Chandra
@ -143,25 +174,37 @@ Ronny Pfannschmidt
Ross Lawley
Russel Winder
Ryan Wooden
Samuel Dion-Girardeau
Samuele Pedroni
Segev Finer
Simon Gomizelj
Skylar Downes
Srinivas Reddy Thatiparthy
Stefan Farmbauer
Stefan Zimmermann
Stefano Taschini
Steffen Allner
Stephan Obermann
Tarcisio Fischer
Tareq Alayan
Ted Xiao
Thomas Grainger
Thomas Hisch
Tim Strazny
Tom Dalton
Tom Viner
Trevor Bekolay
Tyler Goodlet
Tzu-ping Chung
Vasily Kuznetsov
Victor Uriarte
Vidar T. Fauske
Vitaly Lashmanov
Vlad Dragos
William Lee
Wouter van Ackooy
Xuan Luong
Xuecong Liao
Zoltán Máté
Roland Puntaier
Allan Feldman

File diff suppressed because it is too large

View File

@ -34,13 +34,13 @@ If you are reporting a bug, please include:
* Your operating system name and version.
* Any details about your local setup that might be helpful in troubleshooting,
specifically Python interpreter version,
installed libraries and pytest version.
specifically the Python interpreter version, installed libraries, and pytest
version.
* Detailed steps to reproduce the bug.
If you can write a demonstration test that currently fails but should pass (xfail),
that is a very useful commit to make as well, even if you can't find how
to fix the bug yet.
If you can write a demonstration test that currently fails but should pass
(xfail), that is a very useful commit to make as well, even if you cannot
fix the bug itself.
.. _fixbugs:
@ -49,7 +49,7 @@ Fix bugs
--------
Look through the GitHub issues for bugs. Here is a filter you can use:
https://github.com/pytest-dev/pytest/labels/bug
https://github.com/pytest-dev/pytest/labels/type%3A%20bug
:ref:`Talk <contact>` to developers to find out how you can fix specific bugs.
@ -120,7 +120,7 @@ the following:
- PyPI presence with a ``setup.py`` that contains a license, ``pytest-``
prefixed name, version number, authors, short and long description.
- a ``tox.ini`` for running tests using `tox <http://tox.testrun.org>`_.
- a ``tox.ini`` for running tests using `tox <https://tox.readthedocs.io>`_.
- a ``README.txt`` describing how to use the plugin and on which
platforms it runs.
@ -158,19 +158,41 @@ As stated, the objective is to share maintenance and avoid "plugin-abandon".
.. _`pull requests`:
.. _pull-requests:
Preparing Pull Requests on GitHub
---------------------------------
Preparing Pull Requests
-----------------------
.. note::
What is a "pull request"? It informs project's core developers about the
changes you want to review and merge. Pull requests are stored on
`GitHub servers <https://github.com/pytest-dev/pytest/pulls>`_.
Once you send a pull request, we can discuss its potential modifications and
even add more commits to it later on.
Short version
~~~~~~~~~~~~~
There's an excellent tutorial on how Pull Requests work in the
`GitHub Help Center <https://help.github.com/articles/using-pull-requests/>`_,
but here is a simple overview:
#. Fork the repository;
#. Target ``master`` for bugfixes and doc changes;
#. Target ``features`` for new features or functionality changes.
#. Follow **PEP-8**. There's a ``tox`` command to help fix it: ``tox -e fix-lint``.
#. Tests are run using ``tox``::
tox -e linting,py27,py36
The test environments above are usually enough to cover most cases locally.
#. Write a ``changelog`` entry: ``changelog/2574.bugfix``, use issue id number
and one of ``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or
``trivial`` for the issue type.
#. Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please
add yourself to the ``AUTHORS`` file, in alphabetical order;
Long version
~~~~~~~~~~~~
What is a "pull request"? It informs the project's core developers about the
changes you want to review and merge. Pull requests are stored on
`GitHub servers <https://github.com/pytest-dev/pytest/pulls>`_.
Once you send a pull request, we can discuss its potential modifications and
even add more commits to it later on. There's an excellent tutorial on how Pull
Requests work in the
`GitHub Help Center <https://help.github.com/articles/using-pull-requests/>`_.
Here is a simple overview, with pytest-specific bits:
#. Fork the
`pytest GitHub repository <https://github.com/pytest-dev/pytest>`__. It's
@ -214,12 +236,18 @@ but here is a simple overview:
This command will run tests via the "tox" tool against Python 2.7 and 3.6
and also perform "lint" coding-style checks.
#. You can now edit your local working copy.
#. You can now edit your local working copy. Please follow PEP-8.
You can now make the changes you want and run the tests again as necessary.
To run tests on Python 2.7 and pass options to pytest (e.g. enter pdb on
failure) you can do::
If you have too many linting errors, try running::
$ tox -e fix-lint
to fix pep8-related errors.
You can pass different options to ``tox``. For example, to run tests on Python 2.7 and pass options to pytest
(e.g. enter pdb on failure) you can do::
$ tox -e py27 -- --pdb
@ -232,9 +260,11 @@ but here is a simple overview:
$ git commit -a -m "<commit message>"
$ git push -u
Make sure you add a message to ``CHANGELOG.rst`` and add yourself to
``AUTHORS``. If you are unsure about either of these steps, submit your
pull request and we'll help you fix it up.
#. Create a new changelog entry in ``changelog``. The file should be named ``<issueid>.<type>``,
where *issueid* is the number of the issue related to the change and *type* is one of
``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or ``trivial``.
#. Add yourself to ``AUTHORS`` file if not there yet, in alphabetical order.
#. Finally, submit a pull request through the GitHub website using this data::
@ -246,3 +276,15 @@ but here is a simple overview:
base: features # if it's a feature
Joining the Development Team
----------------------------
Anyone who has successfully seen through a pull request which did not
require any extra work from the development team to merge will
themselves gain commit access if they so wish (if we forget to ask please send a friendly
reminder). This does not change your workflow for contributing:
everyone goes through the same pull-request-and-review process and
no-one merges their own pull requests unless already approved. It does, however, mean you can
participate in the development process more fully since you can merge
pull requests from other contributors yourself after having reviewed
them.

View File

@ -1,5 +1,9 @@
How to release pytest
--------------------------------------------
Release Procedure
-----------------
Our current policy for releasing is to aim for a bugfix every few weeks and a minor release every 2-3 months. The idea
is to get fixes and new features out instead of trying to cram a ton of features into a release and, as a
consequence, taking a long time to make a new one.
.. important::
@ -8,7 +12,7 @@ How to release pytest
#. Install development dependencies in a virtual environment with::
pip3 install -r tasks/requirements.txt
pip3 install -U -r tasks/requirements.txt
#. Create a branch ``release-X.Y.Z`` with the version for the release.
@ -18,44 +22,28 @@ How to release pytest
Ensure you are in a clean work tree.
#. Generate docs, changelog, announcements and upload a package to
your ``devpi`` staging server::
#. Generate docs, changelog, announcements and a **local** tag::
invoke generate.pre_release <VERSION> <DEVPI USER> --password <DEVPI PASSWORD>
If ``--password`` is not given, it is assumed the user is already logged in ``devpi``.
If you don't have an account, please ask for one.
invoke generate.pre-release <VERSION>
#. Open a PR for this branch targeting ``master``.
#. Test the package
#. After all tests pass and the PR has been approved, publish to PyPI by pushing the tag::
* **Manual method**
git push git@github.com:pytest-dev/pytest.git <VERSION>
Run from multiple machines::
Wait for the deploy to complete, then make sure it is `available on PyPI <https://pypi.org/project/pytest>`_.
devpi use https://devpi.net/USER/dev
devpi test pytest==VERSION
#. Send an email announcement with the contents from::
Check that tests pass for relevant combinations with::
doc/en/announce/release-<VERSION>.rst
devpi list pytest
To the following mailing lists:
* **CI servers**
* pytest-dev@python.org (all releases)
* python-announce-list@python.org (all releases)
* testing-in-python@lists.idyll.org (only major/minor releases)
Configure a repository as per-instructions on
devpi-cloud-test_ to test the package on Travis_ and AppVeyor_.
All test environments should pass.
And announce it on `Twitter <https://twitter.com/>`_ with the ``#pytest`` hashtag.
#. Publish to PyPI::
invoke generate.publish_release <VERSION> <DEVPI USER> <PYPI_NAME>
where PYPI_NAME is the name of pypi.python.org as configured in your ``~/.pypirc``
file `for devpi <http://doc.devpi.net/latest/quickstart-releaseprocess.html?highlight=pypirc#devpi-push-releasing-to-an-external-index>`_.
#. After a minor/major release, merge ``features`` into ``master`` and push (or open a PR).
.. _devpi-cloud-test: https://github.com/obestwalter/devpi-cloud-test
.. _AppVeyor: https://www.appveyor.com/
.. _Travis: https://travis-ci.org
#. After a minor/major release, merge ``release-X.Y.Z`` into ``master`` and push (or open a PR).

View File

@ -23,6 +23,9 @@
.. image:: https://ci.appveyor.com/api/projects/status/mrgbjaua7t33pg6b?svg=true
:target: https://ci.appveyor.com/project/pytestbot/pytest
.. image:: https://www.codetriage.com/pytest-dev/pytest/badges/users.svg
:target: https://www.codetriage.com/pytest-dev/pytest
The ``pytest`` framework makes it easy to write small tests, yet
scales to support complex functional testing for applications and libraries.
@ -76,9 +79,9 @@ Features
- Can run `unittest <http://docs.pytest.org/en/latest/unittest.html>`_ (or trial),
`nose <http://docs.pytest.org/en/latest/nose.html>`_ test suites out of the box;
- Python2.6+, Python3.3+, PyPy-2.3, Jython-2.5 (untested);
- Python 2.7, Python 3.4+, PyPy 2.3, Jython 2.5 (untested);
- Rich plugin architecture, with over 150+ `external plugins <http://docs.pytest.org/en/latest/plugins.html#installing-external-plugins-searching>`_ and thriving community;
- Rich plugin architecture, with 315+ `external plugins <http://plugincompat.herokuapp.com>`_ and a thriving community;
Documentation

View File

@ -4,9 +4,6 @@ needs argcomplete>=0.5.6 for python 3.2/3.3 (older versions fail
to find the magic string, so _ARGCOMPLETE env. var is never set, and
this does not need special code.
argcomplete does not support python 2.5 (although the changes for that
are minor).
Function try_argcomplete(parser) should be called directly before
the call to ArgumentParser.parse_args().
@ -62,21 +59,24 @@ import sys
import os
from glob import glob
class FastFilesCompleter:
class FastFilesCompleter(object):
'Fast file completer class'
def __init__(self, directories=True):
self.directories = directories
def __call__(self, prefix, **kwargs):
"""only called on non option completions"""
if os.path.sep in prefix[1:]: #
if os.path.sep in prefix[1:]:
prefix_dir = len(os.path.dirname(prefix) + os.path.sep)
else:
prefix_dir = 0
completion = []
globbed = []
if '*' not in prefix and '?' not in prefix:
if prefix[-1] == os.path.sep: # we are on unix, otherwise no bash
# we are on unix, otherwise no bash
if not prefix or prefix[-1] == os.path.sep:
globbed.extend(glob(prefix + '.*'))
prefix += '*'
globbed.extend(glob(prefix))
@ -96,7 +96,8 @@ if os.environ.get('_ARGCOMPLETE'):
filescompleter = FastFilesCompleter()
def try_argcomplete(parser):
argcomplete.autocomplete(parser)
argcomplete.autocomplete(parser, always_complete_options=False)
else:
def try_argcomplete(parser): pass
def try_argcomplete(parser):
pass
filescompleter = None
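As a rough illustration (not part of the diff itself) of the globbing behaviour changed above, the completer treats a prefix that is empty or ends with a path separator as a directory listing and also picks up hidden entries; the helper and paths below are hypothetical::

    import os
    from glob import glob

    def complete(prefix):
        # simplified sketch of FastFilesCompleter.__call__ from the hunk above
        globbed = []
        if '*' not in prefix and '?' not in prefix:
            # empty prefix or "some/dir/" style prefix: also list hidden entries
            if not prefix or prefix[-1] == os.path.sep:
                globbed.extend(glob(prefix + '.*'))
            prefix += '*'
        globbed.extend(glob(prefix))
        return sorted(globbed)

    # e.g. complete('tests/') might return ['tests/.hidden', 'tests/test_foo.py', ...]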

View File

@ -5,6 +5,7 @@
from __future__ import absolute_import, division, print_function
import types
def format_exception_only(etype, value):
"""Format the exception part of a traceback.
@ -30,7 +31,7 @@ def format_exception_only(etype, value):
# would throw another exception and mask the original problem.
if (isinstance(etype, BaseException) or
isinstance(etype, types.InstanceType) or
etype is None or type(etype) is str):
etype is None or type(etype) is str):
return [_format_final_exc_line(etype, value)]
stype = etype.__name__
@ -62,6 +63,7 @@ def format_exception_only(etype, value):
lines.append(_format_final_exc_line(stype, value))
return lines
def _format_final_exc_line(etype, value):
"""Return a list of a single line -- normal case for format_exception_only"""
valuestr = _some_str(value)
@ -71,6 +73,7 @@ def _format_final_exc_line(etype, value):
line = "%s: %s\n" % (etype, valuestr)
return line
def _some_str(value):
try:
return unicode(value)

View File

@ -1,6 +1,10 @@
from __future__ import absolute_import, division, print_function
import inspect
import sys
import traceback
from inspect import CO_VARARGS, CO_VARKEYWORDS
import attr
import re
from weakref import ref
from _pytest.compat import _PY2, _PY3, PY35, safe_str
@ -8,8 +12,6 @@ from _pytest.compat import _PY2, _PY3, PY35, safe_str
import py
builtin_repr = repr
reprlib = py.builtin._tryimport('repr', 'reprlib')
if _PY3:
from traceback import format_exception_only
else:
@ -18,6 +20,7 @@ else:
class Code(object):
""" wrapper around Python code objects """
def __init__(self, rawcode):
if not hasattr(rawcode, "co_filename"):
rawcode = getrawcode(rawcode)
@ -26,7 +29,7 @@ class Code(object):
self.firstlineno = rawcode.co_firstlineno - 1
self.name = rawcode.co_name
except AttributeError:
raise TypeError("not a code object: %r" %(rawcode,))
raise TypeError("not a code object: %r" % (rawcode,))
self.raw = rawcode
def __eq__(self, other):
@ -82,6 +85,7 @@ class Code(object):
argcount += raw.co_flags & CO_VARKEYWORDS
return raw.co_varnames[:argcount]
class Frame(object):
"""Wrapper around a Python frame holding f_locals and f_globals
in which expressions can be evaluated."""
@ -119,7 +123,7 @@ class Frame(object):
"""
f_locals = self.f_locals.copy()
f_locals.update(vars)
py.builtin.exec_(code, self.f_globals, f_locals )
py.builtin.exec_(code, self.f_globals, f_locals)
def repr(self, object):
""" return a 'safe' (non-recursive, one-line) string repr for 'object'
@ -143,6 +147,7 @@ class Frame(object):
pass # this can occur when using Psyco
return retval
class TracebackEntry(object):
""" a single entry in a traceback """
@ -168,7 +173,7 @@ class TracebackEntry(object):
return self.lineno - self.frame.code.firstlineno
def __repr__(self):
return "<TracebackEntry %s:%d>" %(self.frame.code.path, self.lineno+1)
return "<TracebackEntry %s:%d>" % (self.frame.code.path, self.lineno + 1)
@property
def statement(self):
@ -232,7 +237,7 @@ class TracebackEntry(object):
except KeyError:
return False
if py.builtin.callable(tbh):
if callable(tbh):
return tbh(None if self._excinfo is None else self._excinfo())
else:
return tbh
@ -247,19 +252,21 @@ class TracebackEntry(object):
line = str(self.statement).lstrip()
except KeyboardInterrupt:
raise
except:
except: # noqa
line = "???"
return " File %r:%d in %s\n %s\n" %(fn, self.lineno+1, name, line)
return " File %r:%d in %s\n %s\n" % (fn, self.lineno + 1, name, line)
def name(self):
return self.frame.code.raw.co_name
name = property(name, None, None, "co_name of underlaying code")
class Traceback(list):
""" Traceback objects encapsulate and offer higher level
access to Traceback entries.
"""
Entry = TracebackEntry
def __init__(self, tb, excinfo=None):
""" initialize from given python traceback object and ExceptionInfo """
self._excinfo = excinfo
@ -289,7 +296,7 @@ class Traceback(list):
(excludepath is None or not hasattr(codepath, 'relto') or
not codepath.relto(excludepath)) and
(lineno is None or x.lineno == lineno) and
(firstlineno is None or x.frame.code.firstlineno == firstlineno)):
(firstlineno is None or x.frame.code.firstlineno == firstlineno)):
return Traceback(x._rawentry, self._excinfo)
return self
@ -315,7 +322,7 @@ class Traceback(list):
""" return last non-hidden traceback entry that lead
to the exception of a traceback.
"""
for i in range(-1, -len(self)-1, -1):
for i in range(-1, -len(self) - 1, -1):
entry = self[i]
if not entry.ishidden():
return entry
@ -330,25 +337,26 @@ class Traceback(list):
# id for the code.raw is needed to work around
# the strange metaprogramming in the decorator lib from pypi
# which generates code objects that have hash/value equality
#XXX needs a test
# XXX needs a test
key = entry.frame.code.path, id(entry.frame.code.raw), entry.lineno
#print "checking for recursion at", key
l = cache.setdefault(key, [])
if l:
# print "checking for recursion at", key
values = cache.setdefault(key, [])
if values:
f = entry.frame
loc = f.f_locals
for otherloc in l:
for otherloc in values:
if f.is_true(f.eval(co_equal,
__recursioncache_locals_1=loc,
__recursioncache_locals_2=otherloc)):
__recursioncache_locals_1=loc,
__recursioncache_locals_2=otherloc)):
return i
l.append(entry.frame.f_locals)
values.append(entry.frame.f_locals)
return None
co_equal = compile('__recursioncache_locals_1 == __recursioncache_locals_2',
'?', 'eval')
class ExceptionInfo(object):
""" wraps sys.exc_info() objects and offers
help for navigating the traceback.
@ -405,10 +413,10 @@ class ExceptionInfo(object):
exconly = self.exconly(tryshort=True)
entry = self.traceback.getcrashentry()
path, lineno = entry.frame.code.raw.co_filename, entry.lineno
return ReprFileLocation(path, lineno+1, exconly)
return ReprFileLocation(path, lineno + 1, exconly)
def getrepr(self, showlocals=False, style="long",
abspath=False, tbfilter=True, funcargs=False):
abspath=False, tbfilter=True, funcargs=False):
""" return str()able representation of this exception info.
showlocals: show locals per traceback entry
style: long|short|no|native traceback style
@ -418,14 +426,14 @@ class ExceptionInfo(object):
"""
if style == 'native':
return ReprExceptionInfo(ReprTracebackNative(
py.std.traceback.format_exception(
traceback.format_exception(
self.type,
self.value,
self.traceback[0]._rawentry,
)), self._getreprcrash())
fmt = FormattedExcinfo(showlocals=showlocals, style=style,
abspath=abspath, tbfilter=tbfilter, funcargs=funcargs)
abspath=abspath, tbfilter=tbfilter, funcargs=funcargs)
return fmt.repr_excinfo(self)
def __str__(self):
@ -452,32 +460,32 @@ class ExceptionInfo(object):
return True
@attr.s
class FormattedExcinfo(object):
""" presenting information about failing Functions and Generators. """
# for traceback entries
flow_marker = ">"
fail_marker = "E"
def __init__(self, showlocals=False, style="long", abspath=True, tbfilter=True, funcargs=False):
self.showlocals = showlocals
self.style = style
self.tbfilter = tbfilter
self.funcargs = funcargs
self.abspath = abspath
self.astcache = {}
showlocals = attr.ib(default=False)
style = attr.ib(default="long")
abspath = attr.ib(default=True)
tbfilter = attr.ib(default=True)
funcargs = attr.ib(default=False)
astcache = attr.ib(default=attr.Factory(dict), init=False, repr=False)
def _getindent(self, source):
# figure out indent for given source
try:
s = str(source.getstatement(len(source)-1))
s = str(source.getstatement(len(source) - 1))
except KeyboardInterrupt:
raise
except:
except: # noqa
try:
s = str(source[-1])
except KeyboardInterrupt:
raise
except:
except: # noqa
return 0
return 4 + (len(s) - len(s.lstrip()))
@ -513,7 +521,7 @@ class FormattedExcinfo(object):
for line in source.lines[:line_index]:
lines.append(space_prefix + line)
lines.append(self.flow_marker + " " + source.lines[line_index])
for line in source.lines[line_index+1:]:
for line in source.lines[line_index + 1:]:
lines.append(space_prefix + line)
if excinfo is not None:
indent = 4 if short else self._getindent(source)
@ -546,13 +554,13 @@ class FormattedExcinfo(object):
# _repr() function, which is only reprlib.Repr in
# disguise, so is very configurable.
str_repr = self._saferepr(value)
#if len(str_repr) < 70 or not isinstance(value,
# if len(str_repr) < 70 or not isinstance(value,
# (list, tuple, dict)):
lines.append("%-10s = %s" %(name, str_repr))
#else:
lines.append("%-10s = %s" % (name, str_repr))
# else:
# self._line("%-10s =\\" % (name,))
# # XXX
# py.std.pprint.pprint(value, stream=self.excinfowriter)
# pprint.pprint(value, stream=self.excinfowriter)
return ReprLocals(lines)
def repr_traceback_entry(self, entry, excinfo=None):
@ -575,14 +583,14 @@ class FormattedExcinfo(object):
s = self.get_source(source, line_index, excinfo, short=short)
lines.extend(s)
if short:
message = "in %s" %(entry.name)
message = "in %s" % (entry.name)
else:
message = excinfo and excinfo.typename or ""
path = self._makepath(entry.path)
filelocrepr = ReprFileLocation(path, entry.lineno+1, message)
filelocrepr = ReprFileLocation(path, entry.lineno + 1, message)
localsrepr = None
if not short:
localsrepr = self.repr_locals(entry.locals)
localsrepr = self.repr_locals(entry.locals)
return ReprEntry(lines, reprargs, localsrepr, filelocrepr, style)
if excinfo:
lines.extend(self.get_exconly(excinfo, indent=4))
@ -645,7 +653,7 @@ class FormattedExcinfo(object):
traceback = traceback[:recursionindex + 1]
else:
extraline = None
return traceback, extraline
def repr_excinfo(self, excinfo):
@ -665,7 +673,7 @@ class FormattedExcinfo(object):
else:
# fallback to native repr if the exception doesn't have a traceback:
# ExceptionInfo objects require a full traceback to work
reprtraceback = ReprTracebackNative(py.std.traceback.format_exception(type(e), e, None))
reprtraceback = ReprTracebackNative(traceback.format_exception(type(e), e, None))
reprcrash = None
repr_chain += [(reprtraceback, reprcrash, descr)]
@ -673,7 +681,7 @@ class FormattedExcinfo(object):
e = e.__cause__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'The above exception was the direct cause of the following exception:'
elif e.__context__ is not None:
elif (e.__context__ is not None and not e.__suppress_context__):
e = e.__context__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'During handling of the above exception, another exception occurred:'
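For orientation (not part of the diff): ``__suppress_context__`` is set by ``raise ... from None``, which is exactly the case the new condition now skips when walking the exception chain. A minimal Python 3 illustration::

    try:
        try:
            1 / 0
        except ZeroDivisionError:
            raise ValueError("converted") from None   # sets __suppress_context__ = True
    except ValueError as exc:
        assert exc.__context__ is not None   # the ZeroDivisionError is still recorded...
        assert exc.__suppress_context__      # ...but should not appear in the report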
@ -699,7 +707,7 @@ class TerminalRepr(object):
return io.getvalue().strip()
def __repr__(self):
return "<%s instance at %0x>" %(self.__class__, id(self))
return "<%s instance at %0x>" % (self.__class__, id(self))
class ExceptionRepr(TerminalRepr):
@ -743,6 +751,7 @@ class ReprExceptionInfo(ExceptionRepr):
self.reprtraceback.toterminal(tw)
super(ReprExceptionInfo, self).toterminal(tw)
class ReprTraceback(TerminalRepr):
entrysep = "_ "
@ -758,7 +767,7 @@ class ReprTraceback(TerminalRepr):
tw.line("")
entry.toterminal(tw)
if i < len(self.reprentries) - 1:
next_entry = self.reprentries[i+1]
next_entry = self.reprentries[i + 1]
if entry.style == "long" or \
entry.style == "short" and next_entry.style == "long":
tw.sep(self.entrysep)
@ -766,12 +775,14 @@ class ReprTraceback(TerminalRepr):
if self.extraline:
tw.line(self.extraline)
class ReprTracebackNative(ReprTraceback):
def __init__(self, tblines):
self.style = "native"
self.reprentries = [ReprEntryNative(tblines)]
self.extraline = None
class ReprEntryNative(TerminalRepr):
style = "native"
@ -781,6 +792,7 @@ class ReprEntryNative(TerminalRepr):
def toterminal(self, tw):
tw.write("".join(self.lines))
class ReprEntry(TerminalRepr):
localssep = "_ "
@ -797,7 +809,7 @@ class ReprEntry(TerminalRepr):
for line in self.lines:
red = line.startswith("E ")
tw.line(line, bold=True, red=red)
#tw.line("")
# tw.line("")
return
if self.reprfuncargs:
self.reprfuncargs.toterminal(tw)
@ -805,7 +817,7 @@ class ReprEntry(TerminalRepr):
red = line.startswith("E ")
tw.line(line, bold=True, red=red)
if self.reprlocals:
#tw.sep(self.localssep, "Locals")
# tw.sep(self.localssep, "Locals")
tw.line("")
self.reprlocals.toterminal(tw)
if self.reprfileloc:
@ -818,6 +830,7 @@ class ReprEntry(TerminalRepr):
self.reprlocals,
self.reprfileloc)
class ReprFileLocation(TerminalRepr):
def __init__(self, path, lineno, message):
self.path = str(path)
@ -834,6 +847,7 @@ class ReprFileLocation(TerminalRepr):
tw.write(self.path, bold=True, red=True)
tw.line(":%s: %s" % (self.lineno, msg))
class ReprLocals(TerminalRepr):
def __init__(self, lines):
self.lines = lines
@ -842,6 +856,7 @@ class ReprLocals(TerminalRepr):
for line in self.lines:
tw.line(line)
class ReprFuncArgs(TerminalRepr):
def __init__(self, args):
self.args = args
@ -850,11 +865,11 @@ class ReprFuncArgs(TerminalRepr):
if self.args:
linesofar = ""
for name, value in self.args:
ns = "%s = %s" %(name, value)
ns = "%s = %s" % (safe_str(name), safe_str(value))
if len(ns) + len(linesofar) + 2 > tw.fullwidth:
if linesofar:
tw.line(linesofar)
linesofar = ns
linesofar = ns
else:
if linesofar:
linesofar += ", " + ns
@ -875,7 +890,7 @@ def getrawcode(obj, trycall=True):
obj = getattr(obj, 'f_code', obj)
obj = getattr(obj, '__code__', obj)
if trycall and not hasattr(obj, 'co_firstlineno'):
if hasattr(obj, '__call__') and not py.std.inspect.isclass(obj):
if hasattr(obj, '__call__') and not inspect.isclass(obj):
x = getrawcode(obj.__call__, trycall=False)
if hasattr(x, 'co_firstlineno'):
return x

View File

@ -1,17 +1,16 @@
from __future__ import absolute_import, division, generators, print_function
import ast
from ast import PyCF_ONLY_AST as _AST_FLAG
from bisect import bisect_right
import linecache
import sys
import inspect, tokenize
import six
import inspect
import tokenize
import py
cpy_compile = compile
try:
import _ast
from _ast import PyCF_ONLY_AST as _AST_FLAG
except ImportError:
_AST_FLAG = 0
_ast = None
cpy_compile = compile
class Source(object):
@ -19,6 +18,7 @@ class Source(object):
possibly deindenting it.
"""
_compilecounter = 0
def __init__(self, *parts, **kwargs):
self.lines = lines = []
de = kwargs.get('deindent', True)
@ -26,11 +26,11 @@ class Source(object):
for part in parts:
if not part:
partlines = []
if isinstance(part, Source):
elif isinstance(part, Source):
partlines = part.lines
elif isinstance(part, (tuple, list)):
partlines = [x.rstrip("\n") for x in part]
elif isinstance(part, py.builtin._basestring):
elif isinstance(part, six.string_types):
partlines = part.split('\n')
if rstrip:
while partlines:
@ -73,7 +73,7 @@ class Source(object):
start, end = 0, len(self)
while start < end and not self.lines[start].strip():
start += 1
while end > start and not self.lines[end-1].strip():
while end > start and not self.lines[end - 1].strip():
end -= 1
source = Source()
source.lines[:] = self.lines[start:end]
@ -86,8 +86,8 @@ class Source(object):
before = Source(before)
after = Source(after)
newsource = Source()
lines = [ (indent + line) for line in self.lines]
newsource.lines = before.lines + lines + after.lines
lines = [(indent + line) for line in self.lines]
newsource.lines = before.lines + lines + after.lines
return newsource
def indent(self, indent=' ' * 4):
@ -95,17 +95,17 @@ class Source(object):
all lines indented by the given indent-string.
"""
newsource = Source()
newsource.lines = [(indent+line) for line in self.lines]
newsource.lines = [(indent + line) for line in self.lines]
return newsource
def getstatement(self, lineno, assertion=False):
def getstatement(self, lineno):
""" return Source statement which contains the
given linenumber (counted from 0).
"""
start, end = self.getstatementrange(lineno, assertion)
start, end = self.getstatementrange(lineno)
return self[start:end]
def getstatementrange(self, lineno, assertion=False):
def getstatementrange(self, lineno):
""" return (start, end) tuple which spans the minimal
statement region which containing the given lineno.
"""
@ -131,20 +131,15 @@ class Source(object):
""" return True if source is parseable, heuristically
deindenting it by default.
"""
try:
import parser
except ImportError:
syntax_checker = lambda x: compile(x, 'asd', 'exec')
else:
syntax_checker = parser.suite
from parser import suite as syntax_checker
if deindent:
source = str(self.deindent())
else:
source = str(self)
try:
#compile(source+'\n', "x", "exec")
syntax_checker(source+'\n')
# compile(source+'\n', "x", "exec")
syntax_checker(source + '\n')
except KeyboardInterrupt:
raise
except Exception:
@ -164,8 +159,8 @@ class Source(object):
"""
if not filename or py.path.local(filename).check(file=0):
if _genframe is None:
_genframe = sys._getframe(1) # the caller
fn,lineno = _genframe.f_code.co_filename, _genframe.f_lineno
_genframe = sys._getframe(1) # the caller
fn, lineno = _genframe.f_code.co_filename, _genframe.f_lineno
base = "<%d-codegen " % self._compilecounter
self.__class__._compilecounter += 1
if not filename:
@ -180,7 +175,7 @@ class Source(object):
# re-represent syntax errors from parsing python strings
msglines = self.lines[:ex.lineno]
if ex.offset:
msglines.append(" "*ex.offset + '^')
msglines.append(" " * ex.offset + '^')
msglines.append("(code was compiled probably from here: %s)" % filename)
newex = SyntaxError('\n'.join(msglines))
newex.offset = ex.offset
@ -191,24 +186,24 @@ class Source(object):
if flag & _AST_FLAG:
return co
lines = [(x + "\n") for x in self.lines]
py.std.linecache.cache[filename] = (1, None, lines, filename)
linecache.cache[filename] = (1, None, lines, filename)
return co
#
# public API shortcut functions
#
def compile_(source, filename=None, mode='exec', flags=
generators.compiler_flag, dont_inherit=0):
def compile_(source, filename=None, mode='exec', flags=generators.compiler_flag, dont_inherit=0):
""" compile the given source to a raw code object,
and maintain an internal cache which allows later
retrieval of the source code for the code object
and any recursively created code objects.
"""
if _ast is not None and isinstance(source, _ast.AST):
if isinstance(source, ast.AST):
# XXX should Source support having AST?
return cpy_compile(source, filename, mode, flags, dont_inherit)
_genframe = sys._getframe(1) # the caller
_genframe = sys._getframe(1) # the caller
s = Source(source)
co = s.compile(filename, mode, flags, _genframe=_genframe)
return co
@ -218,13 +213,12 @@ def getfslineno(obj):
""" Return source location (path, lineno) for the given object.
If the source cannot be determined return ("", -1)
"""
import _pytest._code
from .code import Code
try:
code = _pytest._code.Code(obj)
code = Code(obj)
except TypeError:
try:
fn = (py.std.inspect.getsourcefile(obj) or
py.std.inspect.getfile(obj))
fn = inspect.getsourcefile(obj) or inspect.getfile(obj)
except TypeError:
return "", -1
@ -245,12 +239,13 @@ def getfslineno(obj):
# helper functions
#
def findsource(obj):
try:
sourcelines, lineno = py.std.inspect.findsource(obj)
sourcelines, lineno = inspect.findsource(obj)
except py.builtin._sysex:
raise
except:
except: # noqa
return None, -1
source = Source()
source.lines = [line.rstrip() for line in sourcelines]
@ -258,8 +253,8 @@ def findsource(obj):
def getsource(obj, **kwargs):
import _pytest._code
obj = _pytest._code.getrawcode(obj)
from .code import getrawcode
obj = getrawcode(obj)
try:
strsrc = inspect.getsource(obj)
except IndentationError:
@ -274,7 +269,7 @@ def deindent(lines, offset=None):
line = line.expandtabs()
s = line.lstrip()
if s:
offset = len(line)-len(s)
offset = len(line) - len(s)
break
else:
offset = 0
@ -285,19 +280,17 @@ def deindent(lines, offset=None):
def readline_generator(lines):
for line in lines:
yield line + '\n'
while True:
yield ''
it = readline_generator(lines)
try:
for _, _, (sline, _), (eline, _), _ in tokenize.generate_tokens(lambda: next(it)):
if sline > len(lines):
break # End of input reached
break # End of input reached
if sline > len(newlines):
line = lines[sline - 1].expandtabs()
if line.lstrip() and line[:offset].isspace():
line = line[offset:] # Deindent
line = line[offset:] # Deindent
newlines.append(line)
for i in range(sline, eline):
@ -315,35 +308,30 @@ def get_statement_startend2(lineno, node):
import ast
# flatten all statements and except handlers into one lineno-list
# AST's line numbers start indexing at 1
l = []
values = []
for x in ast.walk(node):
if isinstance(x, _ast.stmt) or isinstance(x, _ast.ExceptHandler):
l.append(x.lineno - 1)
for name in "finalbody", "orelse":
if isinstance(x, (ast.stmt, ast.ExceptHandler)):
values.append(x.lineno - 1)
for name in ("finalbody", "orelse"):
val = getattr(x, name, None)
if val:
# treat the finally/orelse part as its own statement
l.append(val[0].lineno - 1 - 1)
l.sort()
insert_index = bisect_right(l, lineno)
start = l[insert_index - 1]
if insert_index >= len(l):
values.append(val[0].lineno - 1 - 1)
values.sort()
insert_index = bisect_right(values, lineno)
start = values[insert_index - 1]
if insert_index >= len(values):
end = None
else:
end = l[insert_index]
end = values[insert_index]
return start, end
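To make the ``bisect_right`` logic above concrete, a small worked example with made-up statement positions::

    from bisect import bisect_right

    # suppose statements start at 0-based lines 0, 3 and 7
    values = [0, 3, 7]
    lineno = 4                                   # find the statement containing line 4
    insert_index = bisect_right(values, lineno)  # -> 2
    start = values[insert_index - 1]             # -> 3
    end = values[insert_index] if insert_index < len(values) else None  # -> 7
    assert (start, end) == (3, 7)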
def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
if astnode is None:
content = str(source)
if sys.version_info < (2,7):
content += "\n"
try:
astnode = compile(content, "source", "exec", 1024) # 1024 for AST
except ValueError:
start, end = getstatementrange_old(lineno, source, assertion)
return None, start, end
astnode = compile(content, "source", "exec", 1024) # 1024 for AST
start, end = get_statement_startend2(lineno, astnode)
# we need to correct the end:
# - ast-parsing strips comments
@ -375,40 +363,3 @@ def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
else:
break
return astnode, start, end
def getstatementrange_old(lineno, source, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region which containing the given lineno.
raise an IndexError if no such statementrange can be found.
"""
# XXX this logic is only used on python2.4 and below
# 1. find the start of the statement
from codeop import compile_command
for start in range(lineno, -1, -1):
if assertion:
line = source.lines[start]
# the following lines are not fully tested, change with care
if 'super' in line and 'self' in line and '__init__' in line:
raise IndexError("likely a subclass")
if "assert" not in line and "raise" not in line:
continue
trylines = source.lines[start:lineno+1]
# quick hack to prepare parsing an indented line with
# compile_command() (which errors on "return" outside defs)
trylines.insert(0, 'def xxx():')
trysource = '\n '.join(trylines)
# ^ space here
try:
compile_command(trysource)
except (SyntaxError, OverflowError, ValueError):
continue
# 2. find the end of the statement
for end in range(lineno+1, len(source)+1):
trysource = source[start:end]
if trysource.isparseable():
return start, end
raise SyntaxError("no valid source range around line %d " % (lineno,))

View File

@ -1,11 +0,0 @@
"""
imports symbols from vendored "pluggy" if available, otherwise
falls back to importing "pluggy" from the default namespace.
"""
from __future__ import absolute_import, division, print_function
try:
from _pytest.vendored_packages.pluggy import * # noqa
from _pytest.vendored_packages.pluggy import __version__ # noqa
except ImportError:
from pluggy import * # noqa
from pluggy import __version__ # noqa

View File

@ -2,8 +2,8 @@
support for presenting detailed information in failing assertions.
"""
from __future__ import absolute_import, division, print_function
import py
import sys
import six
from _pytest.assertion import util
from _pytest.assertion import rewrite
@ -25,7 +25,6 @@ def pytest_addoption(parser):
expression information.""")
def register_assert_rewrite(*names):
"""Register one or more module names to be rewritten on import.
@ -57,7 +56,7 @@ class DummyRewriteHook(object):
pass
class AssertionState:
class AssertionState(object):
"""State for the assertion plugin."""
def __init__(self, config, mode):
@ -68,10 +67,8 @@ class AssertionState:
def install_importhook(config):
"""Try to install the rewrite hook, raise SystemError if it fails."""
# Both Jython and CPython 2.6.0 have AST bugs that make the
# assertion rewriting hook malfunction.
if (sys.platform.startswith('java') or
sys.version_info[:3] == (2, 6, 0)):
# Jython has an AST bug that makes the assertion rewriting hook malfunction.
if (sys.platform.startswith('java')):
raise SystemError('rewrite not supported')
config._assertstate = AssertionState(config, 'rewrite')
@ -127,7 +124,7 @@ def pytest_runtest_setup(item):
if new_expl:
new_expl = truncate.truncate_if_required(new_expl, item)
new_expl = [line.replace("\n", "\\n") for line in new_expl]
res = py.builtin._totext("\n~").join(new_expl)
res = six.text_type("\n~").join(new_expl)
if item.config.getvalue("assertmode") == "rewrite":
res = res.replace("%", "%%")
return res

View File

@ -1,18 +1,20 @@
"""Rewrite assertion AST to produce nice error messages"""
from __future__ import absolute_import, division, print_function
import ast
import _ast
import errno
import itertools
import imp
import marshal
import os
import re
import six
import struct
import sys
import types
import atomicwrites
import py
from _pytest.assertion import util
@ -33,13 +35,13 @@ else:
PYC_EXT = ".py" + (__debug__ and "c" or "o")
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT
REWRITE_NEWLINES = sys.version_info[:2] != (2, 7) and sys.version_info < (3, 2)
ASCII_IS_DEFAULT_ENCODING = sys.version_info[0] < 3
if sys.version_info >= (3,5):
if sys.version_info >= (3, 5):
ast_Call = ast.Call
else:
ast_Call = lambda a,b,c: ast.Call(a, b, c, None, None)
def ast_Call(a, b, c):
return ast.Call(a, b, c, None, None)
class AssertionRewritingHook(object):
@ -140,7 +142,7 @@ class AssertionRewritingHook(object):
# Probably a SyntaxError in the test.
return None
if write:
_make_rewritten_pyc(state, source_stat, pyc, co)
_write_pyc(state, co, source_stat, pyc)
else:
state.trace("found cached rewritten pyc for %r" % (fn,))
self.modules[name] = co, pyc
@ -167,29 +169,31 @@ class AssertionRewritingHook(object):
return True
for marked in self._must_rewrite:
if name.startswith(marked):
if name == marked or name.startswith(marked + '.'):
state.trace("matched marked file %r (from %r)" % (name, marked))
return True
return False
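The behavioural change in the marked-name check above can be summarised like this (illustrative only, module names are made up)::

    def old_match(name, marked="foo"):
        return name.startswith(marked)

    def new_match(name, marked="foo"):
        return name == marked or name.startswith(marked + '.')

    assert old_match("foobar") and not new_match("foobar")  # sibling module no longer caught
    assert new_match("foo") and new_match("foo.bar")        # the package and its submodules still are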
def mark_rewrite(self, *names):
"""Mark import names as needing to be re-written.
"""Mark import names as needing to be rewritten.
The named module or package as well as any nested modules will
be re-written on import.
be rewritten on import.
"""
already_imported = set(names).intersection(set(sys.modules))
if already_imported:
for name in already_imported:
if name not in self._rewritten_names:
self._warn_already_imported(name)
already_imported = (set(names)
.intersection(sys.modules)
.difference(self._rewritten_names))
for name in already_imported:
if not AssertionRewriter.is_rewrite_disabled(
sys.modules[name].__doc__ or ""):
self._warn_already_imported(name)
self._must_rewrite.update(names)
def _warn_already_imported(self, name):
self.config.warn(
'P1',
'Module already imported so can not be re-written: %s' % name)
'Module already imported so cannot be rewritten: %s' % name)
def load_module(self, name):
# If there is an existing module object named 'fullname' in
@ -209,14 +213,12 @@ class AssertionRewritingHook(object):
mod.__cached__ = pyc
mod.__loader__ = self
py.builtin.exec_(co, mod.__dict__)
except:
except: # noqa
if name in sys.modules:
del sys.modules[name]
raise
return sys.modules[name]
def is_package(self, name):
try:
fd, fn, desc = imp.find_module(name)
@ -258,22 +260,21 @@ def _write_pyc(state, co, source_stat, pyc):
# sometime to be able to use imp.load_compiled to load them. (See
# the comment in load_module above.)
try:
fp = open(pyc, "wb")
except IOError:
err = sys.exc_info()[1].errno
state.trace("error writing pyc file at %s: errno=%s" %(pyc, err))
with atomicwrites.atomic_write(pyc, mode="wb", overwrite=True) as fp:
fp.write(imp.get_magic())
mtime = int(source_stat.mtime)
size = source_stat.size & 0xFFFFFFFF
fp.write(struct.pack("<ll", mtime, size))
if six.PY2:
marshal.dump(co, fp.file)
else:
marshal.dump(co, fp)
except EnvironmentError as e:
state.trace("error writing pyc file at %s: errno=%s" % (pyc, e.errno))
# we ignore any failure to write the cache file
# there are many reasons, permission-denied, __pycache__ being a
# file etc.
return False
try:
fp.write(imp.get_magic())
mtime = int(source_stat.mtime)
size = source_stat.size & 0xFFFFFFFF
fp.write(struct.pack("<ll", mtime, size))
marshal.dump(co, fp)
finally:
fp.close()
return True
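Not part of the diff, but for orientation: ``atomicwrites.atomic_write`` writes to a temporary file and renames it into place on success, which is why the per-PID temp file plus ``os.rename`` dance in ``_make_rewritten_pyc`` (removed further down) is no longer needed. A tiny usage sketch with a hypothetical path::

    from atomicwrites import atomic_write

    # the path and payload are made up; the point is that readers never
    # observe a half-written file, even if the process dies mid-write
    with atomic_write("example.pyc", mode="wb", overwrite=True) as fp:
        fp.write(b"\x00" * 16)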
@ -283,6 +284,7 @@ N = "\n".encode("utf-8")
cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+")
BOM_UTF8 = '\xef\xbb\xbf'
def _rewrite_test(config, fn):
"""Try to read and rewrite *fn* and return the code object."""
state = config._assertstate
@ -307,7 +309,7 @@ def _rewrite_test(config, fn):
end2 = source.find("\n", end1 + 1)
if (not source.startswith(BOM_UTF8) and
cookie_re.match(source[0:end1]) is None and
cookie_re.match(source[end1 + 1:end2]) is None):
cookie_re.match(source[end1 + 1:end2]) is None):
if hasattr(state, "_indecode"):
# encodings imported us again, so don't rewrite.
return None, None
@ -320,10 +322,6 @@ def _rewrite_test(config, fn):
return None, None
finally:
del state._indecode
# On Python versions which are not 2.7 and less than or equal to 3.1, the
# parser expects *nix newlines.
if REWRITE_NEWLINES:
source = source.replace(RN, N) + N
try:
tree = ast.parse(source)
except SyntaxError:
@ -340,18 +338,6 @@ def _rewrite_test(config, fn):
return None, None
return stat, co
def _make_rewritten_pyc(state, source_stat, pyc, co):
"""Try to dump rewritten code to *pyc*."""
if sys.platform.startswith("win"):
# Windows grants exclusive access to open files and doesn't have atomic
# rename, so just write into the final file.
_write_pyc(state, co, source_stat, pyc)
else:
# When not on windows, assume rename is atomic. Dump the code object
# into a file specific to this process and atomically replace it.
proc_pyc = pyc + "." + str(os.getpid())
if _write_pyc(state, co, source_stat, proc_pyc):
os.rename(proc_pyc, pyc)
def _read_pyc(source, pyc, trace=lambda x: None):
"""Possibly read a pytest pyc containing rewritten code.
@ -403,14 +389,15 @@ def _saferepr(obj):
"""
repr = py.io.saferepr(obj)
if py.builtin._istext(repr):
t = py.builtin.text
if isinstance(repr, six.text_type):
t = six.text_type
else:
t = py.builtin.bytes
t = six.binary_type
return repr.replace(t("\n"), t("\\n"))
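As an aside (not from the diff), the newline replacement above matters for objects whose ``__repr__`` spans several lines, so the assertion explanation stays on a single line; a hypothetical example::

    class Multi(object):
        def __repr__(self):
            return "first line\nsecond line"

    # the real newline is turned into a literal backslash-n
    assert repr(Multi()).replace("\n", "\\n") == "first line\\nsecond line"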
from _pytest.assertion.util import format_explanation as _format_explanation # noqa
from _pytest.assertion.util import format_explanation as _format_explanation # noqa
def _format_assertmsg(obj):
"""Format the custom assertion message given.
@ -424,32 +411,35 @@ def _format_assertmsg(obj):
# contains a newline it gets escaped, however if an object has a
# .__repr__() which contains newlines it does not get escaped.
# However in either case we want to preserve the newline.
if py.builtin._istext(obj) or py.builtin._isbytes(obj):
if isinstance(obj, six.text_type) or isinstance(obj, six.binary_type):
s = obj
is_repr = False
else:
s = py.io.saferepr(obj)
is_repr = True
if py.builtin._istext(s):
t = py.builtin.text
if isinstance(s, six.text_type):
t = six.text_type
else:
t = py.builtin.bytes
t = six.binary_type
s = s.replace(t("\n"), t("\n~")).replace(t("%"), t("%%"))
if is_repr:
s = s.replace(t("\\n"), t("\n~"))
return s
def _should_repr_global_name(obj):
return not hasattr(obj, "__name__") and not py.builtin.callable(obj)
return not hasattr(obj, "__name__") and not callable(obj)
def _format_boolop(explanations, is_or):
explanation = "(" + (is_or and " or " or " and ").join(explanations) + ")"
if py.builtin._istext(explanation):
t = py.builtin.text
if isinstance(explanation, six.text_type):
t = six.text_type
else:
t = py.builtin.bytes
t = six.binary_type
return explanation.replace(t('%'), t('%%'))
def _call_reprcompare(ops, results, expls, each_obj):
for i, res, expl in zip(range(len(ops)), results, expls):
try:
@ -483,7 +473,7 @@ binop_map = {
ast.Mult: "*",
ast.Div: "/",
ast.FloorDiv: "//",
ast.Mod: "%%", # escaped for string formatting
ast.Mod: "%%", # escaped for string formatting
ast.Eq: "==",
ast.NotEq: "!=",
ast.Lt: "<",
@ -527,7 +517,7 @@ class AssertionRewriter(ast.NodeVisitor):
"""Assertion rewriting implementation.
The main entrypoint is to call .run() with an ast.Module instance,
this will then find all the assert statements and re-write them to
this will then find all the assert statements and rewrite them to
provide intermediate values and a detailed assertion error. See
http://pybites.blogspot.be/2011/07/behind-scenes-of-pytests-new-assertion.html
for an overview of how this works.
@ -536,7 +526,7 @@ class AssertionRewriter(ast.NodeVisitor):
statements in an ast.Module and for each ast.Assert statement it
finds call .visit() with it. Then .visit_Assert() takes over and
is responsible for creating new ast statements to replace the
original assert statement: it re-writes the test of an assertion
original assert statement: it rewrites the test of an assertion
to provide intermediate values and replace it with an if statement
which raises an assertion error with a detailed explanation in
case the expression is false.
@ -589,23 +579,26 @@ class AssertionRewriter(ast.NodeVisitor):
# docstrings and __future__ imports.
aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"),
ast.alias("_pytest.assertion.rewrite", "@pytest_ar")]
expect_docstring = True
doc = getattr(mod, "docstring", None)
expect_docstring = doc is None
if doc is not None and self.is_rewrite_disabled(doc):
return
pos = 0
lineno = 0
lineno = 1
for item in mod.body:
if (expect_docstring and isinstance(item, ast.Expr) and
isinstance(item.value, ast.Str)):
doc = item.value.s
if "PYTEST_DONT_REWRITE" in doc:
# The module has disabled assertion rewriting.
if self.is_rewrite_disabled(doc):
return
lineno += len(doc) - 1
expect_docstring = False
elif (not isinstance(item, ast.ImportFrom) or item.level > 0 or
item.module != "__future__"):
lineno = item.lineno
break
pos += 1
else:
lineno = item.lineno
imports = [ast.Import([alias], lineno=lineno, col_offset=0)
for alias in aliases]
mod.body[pos:pos] = imports
@ -631,6 +624,10 @@ class AssertionRewriter(ast.NodeVisitor):
not isinstance(field, ast.expr)):
nodes.append(field)
@staticmethod
def is_rewrite_disabled(docstring):
return "PYTEST_DONT_REWRITE" in docstring
def variable(self):
"""Get a new variable."""
# Use a character invalid in python identifiers to avoid clashing.
@ -714,7 +711,7 @@ class AssertionRewriter(ast.NodeVisitor):
def visit_Assert(self, assert_):
"""Return the AST statements to replace the ast.Assert instance.
This re-writes the test of an assertion to provide
This rewrites the test of an assertion to provide
intermediate values and replace it with an if statement which
raises an assertion error with a detailed explanation in case
the expression is false.
@ -723,7 +720,7 @@ class AssertionRewriter(ast.NodeVisitor):
if isinstance(assert_.test, ast.Tuple) and self.config is not None:
fslocation = (self.module_path, assert_.lineno)
self.config.warn('R1', 'assertion is always true, perhaps '
'remove parentheses?', fslocation=fslocation)
'remove parentheses?', fslocation=fslocation)
self.statements = []
self.variables = []
self.variable_counter = itertools.count()
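The warning emitted above targets the classic mistake of parenthesising an assert message (example not from the diff)::

    # a non-empty tuple is always truthy, so this assert can never fail
    assert (1 == 2, "expected 1 to equal 2")

    # what was almost certainly meant instead:
    assert 1 == 2, "expected 1 to equal 2"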
@ -787,7 +784,7 @@ class AssertionRewriter(ast.NodeVisitor):
if i:
fail_inner = []
# cond is set in a prior loop iteration below
self.on_failure.append(ast.If(cond, fail_inner, [])) # noqa
self.on_failure.append(ast.If(cond, fail_inner, [])) # noqa
self.on_failure = fail_inner
self.push_format_context()
res, expl = self.visit(v)
@ -839,7 +836,7 @@ class AssertionRewriter(ast.NodeVisitor):
new_kwargs.append(ast.keyword(keyword.arg, res))
if keyword.arg:
arg_expls.append(keyword.arg + "=" + expl)
else: ## **args have `arg` keywords with an .arg of None
else: # **args have `arg` keywords with an .arg of None
arg_expls.append("**" + expl)
expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
@ -893,7 +890,6 @@ class AssertionRewriter(ast.NodeVisitor):
else:
visit_Call = visit_Call_legacy
def visit_Attribute(self, attr):
if not isinstance(attr.ctx, ast.Load):
return self.generic_visit(attr)
@ -907,7 +903,7 @@ class AssertionRewriter(ast.NodeVisitor):
def visit_Compare(self, comp):
self.push_format_context()
left_res, left_expl = self.visit(comp.left)
if isinstance(comp.left, (_ast.Compare, _ast.BoolOp)):
if isinstance(comp.left, (ast.Compare, ast.BoolOp)):
left_expl = "({0})".format(left_expl)
res_variables = [self.variable() for i in range(len(comp.ops))]
load_names = [ast.Name(v, ast.Load()) for v in res_variables]
@ -918,7 +914,7 @@ class AssertionRewriter(ast.NodeVisitor):
results = [left_res]
for i, op, next_operand in it:
next_res, next_expl = self.visit(next_operand)
if isinstance(next_operand, (_ast.Compare, _ast.BoolOp)):
if isinstance(next_operand, (ast.Compare, ast.BoolOp)):
next_expl = "({0})".format(next_expl)
results.append(next_res)
sym = binop_map[op.__class__]

View File

@ -7,7 +7,7 @@ Current default behaviour is to truncate assertion explanations at
from __future__ import absolute_import, division, print_function
import os
import py
import six
DEFAULT_MAX_LINES = 8
@ -74,8 +74,8 @@ def _truncate_explanation(input_lines, max_lines=None, max_chars=None):
msg += ' ({0} lines hidden)'.format(truncated_line_count)
msg += ", {0}" .format(USAGE_MSG)
truncated_explanation.extend([
py.builtin._totext(""),
py.builtin._totext(msg),
six.text_type(""),
six.text_type(msg),
])
return truncated_explanation

View File

@ -4,13 +4,10 @@ import pprint
import _pytest._code
import py
try:
from collections import Sequence
except ImportError:
Sequence = list
import six
from ..compat import Sequence
u = py.builtin._totext
u = six.text_type
# The _reprcompare attribute on the util module is used by the new assertion
# interpretation code and assertion rewriter to detect this plugin was
@ -53,11 +50,11 @@ def _split_explanation(explanation):
"""
raw_lines = (explanation or u('')).split('\n')
lines = [raw_lines[0]]
for l in raw_lines[1:]:
if l and l[0] in ['{', '}', '~', '>']:
lines.append(l)
for values in raw_lines[1:]:
if values and values[0] in ['{', '}', '~', '>']:
lines.append(values)
else:
lines[-1] += '\\n' + l
lines[-1] += '\\n' + values
return lines
@ -82,7 +79,7 @@ def _format_lines(lines):
stack.append(len(result))
stackcnt[-1] += 1
stackcnt.append(0)
result.append(u(' +') + u(' ')*(len(stack)-1) + s + line[1:])
result.append(u(' +') + u(' ') * (len(stack) - 1) + s + line[1:])
elif line.startswith('}'):
stack.pop()
stackcnt.pop()
@ -91,7 +88,7 @@ def _format_lines(lines):
assert line[0] in ['~', '>']
stack[-1] += 1
indent = len(stack) if line.startswith('~') else len(stack) - 1
result.append(u(' ')*indent + line[1:])
result.append(u(' ') * indent + line[1:])
assert len(stack) == 1
return result
@ -106,16 +103,22 @@ except NameError:
def assertrepr_compare(config, op, left, right):
"""Return specialised explanations for some operators/operands"""
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
left_repr = py.io.saferepr(left, maxsize=int(width//2))
right_repr = py.io.saferepr(right, maxsize=width-len(left_repr))
left_repr = py.io.saferepr(left, maxsize=int(width // 2))
right_repr = py.io.saferepr(right, maxsize=width - len(left_repr))
summary = u('%s %s %s') % (ecu(left_repr), op, ecu(right_repr))
issequence = lambda x: (isinstance(x, (list, tuple, Sequence)) and
not isinstance(x, basestring))
istext = lambda x: isinstance(x, basestring)
isdict = lambda x: isinstance(x, dict)
isset = lambda x: isinstance(x, (set, frozenset))
def issequence(x):
return isinstance(x, Sequence) and not isinstance(x, basestring)
def istext(x):
return isinstance(x, basestring)
def isdict(x):
return isinstance(x, dict)
def isset(x):
return isinstance(x, (set, frozenset))
def isiterable(obj):
try:
@ -168,9 +171,9 @@ def _diff_text(left, right, verbose=False):
"""
from difflib import ndiff
explanation = []
if isinstance(left, py.builtin.bytes):
if isinstance(left, six.binary_type):
left = u(repr(left)[1:-1]).replace(r'\n', '\n')
if isinstance(right, py.builtin.bytes):
if isinstance(right, six.binary_type):
right = u(repr(right)[1:-1]).replace(r'\n', '\n')
if not verbose:
i = 0 # just in case left or right has zero length
@ -285,7 +288,7 @@ def _compare_eq_dict(left, right, verbose=False):
def _notin_text(term, text, verbose=False):
index = text.find(term)
head = text[:index]
tail = text[index+len(term):]
tail = text[index + len(term):]
correct_text = head + tail
diff = _diff_text(correct_text, text, verbose)
newdiff = [u('%s is contained here:') % py.io.saferepr(term, maxsize=42)]

View File

@ -5,23 +5,39 @@ the name cache was not chosen to ensure pluggy automatically
ignores the external pytest-cache
"""
from __future__ import absolute_import, division, print_function
from collections import OrderedDict
import py
import six
import pytest
import json
import os
from os.path import sep as _sep, altsep as _altsep
class Cache(object):
def __init__(self, config):
self.config = config
self._cachedir = config.rootdir.join(".cache")
self._cachedir = Cache.cache_dir_from_config(config)
self.trace = config.trace.root.get("cache")
if config.getvalue("cacheclear"):
if config.getoption("cacheclear"):
self.trace("clearing cachedir")
if self._cachedir.check():
self._cachedir.remove()
self._cachedir.mkdir()
@staticmethod
def cache_dir_from_config(config):
cache_dir = config.getini("cache_dir")
cache_dir = os.path.expanduser(cache_dir)
cache_dir = os.path.expandvars(cache_dir)
if os.path.isabs(cache_dir):
return py.path.local(cache_dir)
else:
return config.rootdir.join(cache_dir)
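# Illustrative sketch, not part of the original diff: how the new
# ``cache_dir`` ini option is expected to resolve, assuming a rootdir of
# /home/user/project. Relative values are joined to rootdir; "~" and
# environment variables are expanded first, absolute values are used as-is:
#
#   cache_dir = .pytest_cache        ->  /home/user/project/.pytest_cache
#   cache_dir = ~/.cache/pytest      ->  <home dir>/.cache/pytest
#   cache_dir = /tmp/pytest-cache    ->  /tmp/pytest-cache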
def makedir(self, name):
""" return a directory path object with the given name. If the
directory does not yet exist, it will be created. You can use it
@ -87,33 +103,35 @@ class Cache(object):
json.dump(value, f, indent=2, sort_keys=True)
class LFPlugin:
class LFPlugin(object):
""" Plugin which implements the --lf (run last-failing) option """
def __init__(self, config):
self.config = config
active_keys = 'lf', 'failedfirst'
self.active = any(config.getvalue(key) for key in active_keys)
if self.active:
self.lastfailed = config.cache.get("cache/lastfailed", {})
else:
self.lastfailed = {}
self.active = any(config.getoption(key) for key in active_keys)
self.lastfailed = config.cache.get("cache/lastfailed", {})
self._previously_failed_count = None
self._no_failures_behavior = self.config.getoption('last_failed_no_failures')
def pytest_report_header(self):
def pytest_report_collectionfinish(self):
if self.active:
if not self.lastfailed:
mode = "run all (no recorded failures)"
if not self._previously_failed_count:
mode = "run {} (no recorded failures)".format(self._no_failures_behavior)
else:
mode = "rerun last %d failures%s" % (
len(self.lastfailed),
" first" if self.config.getvalue("failedfirst") else "")
noun = 'failure' if self._previously_failed_count == 1 else 'failures'
suffix = " first" if self.config.getoption(
"failedfirst") else ""
mode = "rerun previous {count} {noun}{suffix}".format(
count=self._previously_failed_count, suffix=suffix, noun=noun
)
return "run-last-failure: %s" % mode
def pytest_runtest_logreport(self, report):
if report.failed and "xfail" not in report.keywords:
if (report.when == 'call' and report.passed) or report.skipped:
self.lastfailed.pop(report.nodeid, None)
elif report.failed:
self.lastfailed[report.nodeid] = True
elif not report.failed:
if report.when == "call":
self.lastfailed.pop(report.nodeid, None)
def pytest_collectreport(self, report):
passed = report.outcome in ('passed', 'skipped')
@ -127,33 +145,72 @@ class LFPlugin:
self.lastfailed[report.nodeid] = True
def pytest_collection_modifyitems(self, session, config, items):
if self.active and self.lastfailed:
previously_failed = []
previously_passed = []
for item in items:
if item.nodeid in self.lastfailed:
previously_failed.append(item)
if self.active:
if self.lastfailed:
previously_failed = []
previously_passed = []
for item in items:
if item.nodeid in self.lastfailed:
previously_failed.append(item)
else:
previously_passed.append(item)
self._previously_failed_count = len(previously_failed)
if not previously_failed:
# running a subset of all tests with recorded failures outside
# of the set of tests currently executing
return
if self.config.getoption("lf"):
items[:] = previously_failed
config.hook.pytest_deselected(items=previously_passed)
else:
previously_passed.append(item)
if not previously_failed and previously_passed:
# running a subset of all tests with recorded failures outside
# of the set of tests currently executing
pass
elif self.config.getvalue("lf"):
items[:] = previously_failed
config.hook.pytest_deselected(items=previously_passed)
else:
items[:] = previously_failed + previously_passed
items[:] = previously_failed + previously_passed
elif self._no_failures_behavior == 'none':
config.hook.pytest_deselected(items=items)
items[:] = []
def pytest_sessionfinish(self, session):
config = self.config
if config.getvalue("cacheshow") or hasattr(config, "slaveinput"):
if config.getoption("cacheshow") or hasattr(config, "slaveinput"):
return
prev_failed = config.cache.get("cache/lastfailed", None) is not None
if (session.testscollected and prev_failed) or self.lastfailed:
saved_lastfailed = config.cache.get("cache/lastfailed", {})
if saved_lastfailed != self.lastfailed:
config.cache.set("cache/lastfailed", self.lastfailed)
class NFPlugin(object):
""" Plugin which implements the --nf (run new-first) option """
def __init__(self, config):
self.config = config
self.active = config.option.newfirst
self.cached_nodeids = config.cache.get("cache/nodeids", [])
def pytest_collection_modifyitems(self, session, config, items):
if self.active:
new_items = OrderedDict()
other_items = OrderedDict()
for item in items:
if item.nodeid not in self.cached_nodeids:
new_items[item.nodeid] = item
else:
other_items[item.nodeid] = item
items[:] = self._get_increasing_order(six.itervalues(new_items)) + \
self._get_increasing_order(six.itervalues(other_items))
self.cached_nodeids = [x.nodeid for x in items if isinstance(x, pytest.Item)]
def _get_increasing_order(self, items):
return sorted(items, key=lambda item: item.fspath.mtime(), reverse=True)
def pytest_sessionfinish(self, session):
config = self.config
if config.getoption("cacheshow") or hasattr(config, "slaveinput"):
return
config.cache.set("cache/nodeids", self.cached_nodeids)
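# Illustrative behaviour of the new --nf/--new-first ordering, not part of
# the original diff: items whose nodeids are missing from "cache/nodeids"
# run first, and each group is ordered by file mtime with the most recently
# modified files first, e.g. assuming these hypothetical tests:
#
#   cached nodeids : ["test_old.py::test_a"]
#   collected items: [test_old.py::test_a, test_new.py::test_b]
#   resulting order: [test_new.py::test_b, test_old.py::test_a]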
def pytest_addoption(parser):
group = parser.getgroup("general")
group.addoption(
@ -165,12 +222,25 @@ def pytest_addoption(parser):
help="run all tests but run the last failures first. "
"This may re-order tests and thus lead to "
"repeated fixture setup/teardown")
group.addoption(
'--nf', '--new-first', action='store_true', dest="newfirst",
help="run tests from new files first, then the rest of the tests "
"sorted by file mtime")
group.addoption(
'--cache-show', action='store_true', dest="cacheshow",
help="show cache contents, don't perform collection or tests")
group.addoption(
'--cache-clear', action='store_true', dest="cacheclear",
help="remove all cache contents at start of test run.")
parser.addini(
"cache_dir", default='.pytest_cache',
help="cache directory path.")
group.addoption(
'--lfnf', '--last-failed-no-failures', action='store',
dest='last_failed_no_failures', choices=('all', 'none'), default='all',
help='change the behavior when no test failed in the last run or no '
'information about the last failures was found in the cache'
)
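# Illustrative usage of the new option, not part of the original diff; the
# behaviour follows LFPlugin.pytest_collection_modifyitems above:
#
#   pytest --lf --last-failed-no-failures all    # default: run the full
#                                                # suite when no failures
#                                                # are recorded in the cache
#   pytest --lf --last-failed-no-failures none   # deselect everything and
#                                                # run no tests in that case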
def pytest_cmdline_main(config):
@ -179,11 +249,11 @@ def pytest_cmdline_main(config):
return wrap_session(config, cacheshow)
@pytest.hookimpl(tryfirst=True)
def pytest_configure(config):
config.cache = Cache(config)
config.pluginmanager.register(LFPlugin(config), "lfplugin")
config.pluginmanager.register(NFPlugin(config), "nfplugin")
@pytest.fixture
@ -224,7 +294,7 @@ def cacheshow(config, session):
val = config.cache.get(key, dummy)
if val is dummy:
tw.line("%s contains unreadable content, "
"will be ignored" % key)
"will be ignored" % key)
else:
tw.line("%s contains:" % key)
stream = py.io.TextIO()
@ -236,7 +306,7 @@ def cacheshow(config, session):
if ddir.isdir() and ddir.listdir():
tw.sep("-", "cache directories")
for p in sorted(basedir.join("d").visit()):
#if p.check(dir=1):
# if p.check(dir=1):
# print("%s/" % p.relto(basedir))
if p.isfile():
key = p.relto(basedir)

View File

@ -4,6 +4,7 @@ per-test stdout/stderr capturing mechanism.
"""
from __future__ import absolute_import, division, print_function
import collections
import contextlib
import sys
import os
@ -11,11 +12,10 @@ import io
from io import UnsupportedOperation
from tempfile import TemporaryFile
import py
import six
import pytest
from _pytest.compat import CaptureIO
unicode = py.builtin.text
patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'}
@ -36,14 +36,15 @@ def pytest_addoption(parser):
def pytest_load_initial_conftests(early_config, parser, args):
ns = early_config.known_args_namespace
if ns.capture == "fd":
_py36_windowsconsoleio_workaround()
_py36_windowsconsoleio_workaround(sys.stdout)
_colorama_workaround()
_readline_workaround()
pluginmanager = early_config.pluginmanager
capman = CaptureManager(ns.capture)
pluginmanager.register(capman, "capturemanager")
# make sure that capturemanager is properly reset at final shutdown
early_config.add_cleanup(capman.reset_capturings)
early_config.add_cleanup(capman.stop_global_capturing)
# make sure logging does not raise exceptions at the end
def silence_logging_at_shutdown():
@ -52,17 +53,30 @@ def pytest_load_initial_conftests(early_config, parser, args):
early_config.add_cleanup(silence_logging_at_shutdown)
# finally trigger conftest loading but while capturing (issue93)
capman.init_capturings()
capman.start_global_capturing()
outcome = yield
out, err = capman.suspendcapture()
out, err = capman.suspend_global_capture()
if outcome.excinfo is not None:
sys.stdout.write(out)
sys.stderr.write(err)
class CaptureManager:
class CaptureManager(object):
"""
Capture plugin which manages enabling and disabling of the appropriate capture method during collection
and each test phase (setup, call, teardown). After each of those points, the captured output is obtained
and attached to the collection/runtest report.
There are two levels of capture:
* global: which is enabled by default and can be suppressed by the ``-s`` option. This is always enabled/disabled
during collection and each test phase.
* fixture: when a test function or one of its fixtures depends on the ``capsys`` or ``capfd`` fixtures. In this
case special handling is needed to ensure the fixtures take precedence over the global capture.
"""
def __init__(self, method):
self._method = method
self._global_capturing = None
def _getcapture(self, method):
if method == "fd":
@ -74,23 +88,24 @@ class CaptureManager:
else:
raise ValueError("unknown capturing method: %r" % method)
def init_capturings(self):
assert not hasattr(self, "_capturing")
self._capturing = self._getcapture(self._method)
self._capturing.start_capturing()
def start_global_capturing(self):
assert self._global_capturing is None
self._global_capturing = self._getcapture(self._method)
self._global_capturing.start_capturing()
def reset_capturings(self):
cap = self.__dict__.pop("_capturing", None)
if cap is not None:
cap.pop_outerr_to_orig()
cap.stop_capturing()
def stop_global_capturing(self):
if self._global_capturing is not None:
self._global_capturing.pop_outerr_to_orig()
self._global_capturing.stop_capturing()
self._global_capturing = None
def resumecapture(self):
self._capturing.resume_capturing()
def resume_global_capture(self):
self._global_capturing.resume_capturing()
def suspendcapture(self, in_=False):
self.deactivate_funcargs()
cap = getattr(self, "_capturing", None)
def suspend_global_capture(self, item=None, in_=False):
if item is not None:
self.deactivate_fixture(item)
cap = getattr(self, "_global_capturing", None)
if cap is not None:
try:
outerr = cap.readouterr()
@ -98,23 +113,26 @@ class CaptureManager:
cap.suspend_capturing(in_=in_)
return outerr
def activate_funcargs(self, pyfuncitem):
capfuncarg = pyfuncitem.__dict__.pop("_capfuncarg", None)
if capfuncarg is not None:
capfuncarg._start()
self._capfuncarg = capfuncarg
def activate_fixture(self, item):
"""If the current item is using ``capsys`` or ``capfd``, activate them so they take precedence over
the global capture.
"""
fixture = getattr(item, "_capture_fixture", None)
if fixture is not None:
fixture._start()
def deactivate_funcargs(self):
capfuncarg = self.__dict__.pop("_capfuncarg", None)
if capfuncarg is not None:
capfuncarg.close()
def deactivate_fixture(self, item):
"""Deactivates the ``capsys`` or ``capfd`` fixture of this item, if any."""
fixture = getattr(item, "_capture_fixture", None)
if fixture is not None:
fixture.close()
@pytest.hookimpl(hookwrapper=True)
def pytest_make_collect_report(self, collector):
if isinstance(collector, pytest.File):
self.resumecapture()
self.resume_global_capture()
outcome = yield
out, err = self.suspendcapture()
out, err = self.suspend_global_capture()
rep = outcome.get_result()
if out:
rep.sections.append(("Captured stdout", out))
@ -125,67 +143,139 @@ class CaptureManager:
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item):
self.resumecapture()
self.resume_global_capture()
# no need to activate a capture fixture because capture fixtures activate themselves during creation; this
# only makes sense when a fixture uses a capture fixture, otherwise the capture fixture will
# be activated during pytest_runtest_call
yield
self.suspendcapture_item(item, "setup")
self.suspend_capture_item(item, "setup")
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(self, item):
self.resumecapture()
self.activate_funcargs(item)
self.resume_global_capture()
# it is important to activate this fixture during the call phase so it overwrites the "global"
# capture
self.activate_fixture(item)
yield
#self.deactivate_funcargs() called from suspendcapture()
self.suspendcapture_item(item, "call")
self.suspend_capture_item(item, "call")
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_teardown(self, item):
self.resumecapture()
self.resume_global_capture()
self.activate_fixture(item)
yield
self.suspendcapture_item(item, "teardown")
self.suspend_capture_item(item, "teardown")
@pytest.hookimpl(tryfirst=True)
def pytest_keyboard_interrupt(self, excinfo):
self.reset_capturings()
self.stop_global_capturing()
@pytest.hookimpl(tryfirst=True)
def pytest_internalerror(self, excinfo):
self.reset_capturings()
self.stop_global_capturing()
def suspendcapture_item(self, item, when, in_=False):
out, err = self.suspendcapture(in_=in_)
def suspend_capture_item(self, item, when, in_=False):
out, err = self.suspend_global_capture(item, in_=in_)
item.add_report_section(when, "stdout", out)
item.add_report_section(when, "stderr", err)
error_capsysfderror = "cannot use capsys and capfd at the same time"
capture_fixtures = {'capfd', 'capfdbinary', 'capsys', 'capsysbinary'}
def _ensure_only_one_capture_fixture(request, name):
fixtures = set(request.fixturenames) & capture_fixtures - set((name,))
if fixtures:
fixtures = sorted(fixtures)
fixtures = fixtures[0] if len(fixtures) == 1 else fixtures
raise request.raiseerror(
"cannot use {0} and {1} at the same time".format(
fixtures, name,
),
)
@pytest.fixture
def capsys(request):
"""Enable capturing of writes to sys.stdout/sys.stderr and make
"""Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple.
which return a ``(out, err)`` namedtuple. ``out`` and ``err`` will be ``text``
objects.
"""
if "capfd" in request.fixturenames:
raise request.raiseerror(error_capsysfderror)
request.node._capfuncarg = c = CaptureFixture(SysCapture, request)
return c
_ensure_only_one_capture_fixture(request, 'capsys')
with _install_capture_fixture_on_item(request, SysCapture) as fixture:
yield fixture
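# Usage example, illustrative and not part of the original diff: since
# readouterr() now returns a namedtuple, captured text can be accessed by
# attribute as well as by tuple unpacking:
#
#     def test_print(capsys):
#         print("hello")
#         captured = capsys.readouterr()
#         assert captured.out == "hello\n"
#         assert captured.err == ""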
@pytest.fixture
def capsysbinary(request):
"""Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
captured output available via ``capsysbinary.readouterr()`` method calls
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``bytes``
objects.
"""
_ensure_only_one_capture_fixture(request, 'capsysbinary')
# Currently, the implementation uses the python3 specific `.buffer`
# property of CaptureIO.
if sys.version_info < (3,):
raise request.raiseerror('capsysbinary is only supported on python 3')
with _install_capture_fixture_on_item(request, SysCaptureBinary) as fixture:
yield fixture
@pytest.fixture
def capfd(request):
"""Enable capturing of writes to file descriptors 1 and 2 and make
"""Enable capturing of writes to file descriptors ``1`` and ``2`` and make
captured output available via ``capfd.readouterr()`` method calls
which return a ``(out, err)`` tuple.
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``text``
objects.
"""
if "capsys" in request.fixturenames:
request.raiseerror(error_capsysfderror)
_ensure_only_one_capture_fixture(request, 'capfd')
if not hasattr(os, 'dup'):
pytest.skip("capfd funcarg needs os.dup")
request.node._capfuncarg = c = CaptureFixture(FDCapture, request)
return c
pytest.skip("capfd fixture needs os.dup function which is not available in this system")
with _install_capture_fixture_on_item(request, FDCapture) as fixture:
yield fixture
class CaptureFixture:
@pytest.fixture
def capfdbinary(request):
"""Enable capturing of write to file descriptors 1 and 2 and make
captured output available via ``capfdbinary.readouterr`` method calls
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be
``bytes`` objects.
"""
_ensure_only_one_capture_fixture(request, 'capfdbinary')
if not hasattr(os, 'dup'):
pytest.skip("capfdbinary fixture needs os.dup function which is not available in this system")
with _install_capture_fixture_on_item(request, FDCaptureBinary) as fixture:
yield fixture
@contextlib.contextmanager
def _install_capture_fixture_on_item(request, capture_class):
"""
Context manager which creates a ``CaptureFixture`` instance and "installs" it on
the item/node of the given request. Used by ``capsys`` and ``capfd``.
The CaptureFixture is added as an attribute of the item because it needs to be accessed
by ``CaptureManager`` during its ``pytest_runtest_*`` hooks.
"""
request.node._capture_fixture = fixture = CaptureFixture(capture_class, request)
capmanager = request.config.pluginmanager.getplugin('capturemanager')
# need to activate this fixture right away in case it is being used by another fixture (setup phase)
# if this fixture is being used only by a test function (call phase), then we wouldn't need this
# activation, but it doesn't hurt
capmanager.activate_fixture(request.node)
yield fixture
fixture.close()
del request.node._capture_fixture
class CaptureFixture(object):
"""
Object returned by :py:func:`capsys`, :py:func:`capsysbinary`, :py:func:`capfd` and :py:func:`capfdbinary`
fixtures.
"""
def __init__(self, captureclass, request):
self.captureclass = captureclass
self.request = request
@ -202,6 +292,10 @@ class CaptureFixture:
cap.stop_capturing()
def readouterr(self):
"""Read and return the captured output so far, resetting the internal buffer.
:return: captured content as a namedtuple with ``out`` and ``err`` string attributes
"""
try:
return self._capture.readouterr()
except AttributeError:
@ -209,12 +303,15 @@ class CaptureFixture:
@contextlib.contextmanager
def disabled(self):
"""Temporarily disables capture while inside the 'with' block."""
self._capture.suspend_capturing()
capmanager = self.request.config.pluginmanager.getplugin('capturemanager')
capmanager.suspendcapture_item(self.request.node, "call", in_=True)
capmanager.suspend_global_capture(item=None, in_=False)
try:
yield
finally:
capmanager.resumecapture()
capmanager.resume_global_capture()
self._capture.resume_capturing()
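# Usage example, illustrative and not part of the original diff: temporarily
# bypass capturing so output reaches the real terminal while a test runs:
#
#     def test_disabling_capturing(capsys):
#         print("this output is captured")
#         with capsys.disabled():
#             print("this output goes straight to the terminal")
#         print("this output is also captured")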
def safe_text_dupfile(f, mode, default_encoding="UTF8"):
@ -238,12 +335,13 @@ def safe_text_dupfile(f, mode, default_encoding="UTF8"):
class EncodedFile(object):
errors = "strict" # possibly needed by py3 code (issue555)
def __init__(self, buffer, encoding):
self.buffer = buffer
self.encoding = encoding
def write(self, obj):
if isinstance(obj, unicode):
if isinstance(obj, six.text_type):
obj = obj.encode(self.encoding, "replace")
self.buffer.write(obj)
@ -251,10 +349,18 @@ class EncodedFile(object):
data = ''.join(linelist)
self.write(data)
@property
def name(self):
"""Ensure that file.name is a string."""
return repr(self.buffer)
def __getattr__(self, name):
return getattr(object.__getattribute__(self, "buffer"), name)
CaptureResult = collections.namedtuple("CaptureResult", ["out", "err"])
class MultiCapture(object):
out = err = in_ = None
@ -315,14 +421,19 @@ class MultiCapture(object):
def readouterr(self):
""" return snapshot unicode value of stdout/stderr capturings. """
return (self.out.snap() if self.out is not None else "",
self.err.snap() if self.err is not None else "")
return CaptureResult(self.out.snap() if self.out is not None else "",
self.err.snap() if self.err is not None else "")
class NoCapture:
class NoCapture(object):
__init__ = start = done = suspend = resume = lambda *args: None
class FDCapture:
""" Capture IO to/from a given os-level filedescriptor. """
class FDCaptureBinary(object):
"""Capture IO to/from a given os-level filedescriptor.
snap() produces `bytes`
"""
def __init__(self, targetfd, tmpfile=None):
self.targetfd = targetfd
@ -361,17 +472,11 @@ class FDCapture:
self.syscapture.start()
def snap(self):
f = self.tmpfile
f.seek(0)
res = f.read()
if res:
enc = getattr(f, "encoding", None)
if enc and isinstance(res, bytes):
res = py.builtin._totext(res, enc, "replace")
f.truncate(0)
f.seek(0)
return res
return ''
self.tmpfile.seek(0)
res = self.tmpfile.read()
self.tmpfile.seek(0)
self.tmpfile.truncate()
return res
def done(self):
""" stop capturing, restore streams, return original capture file,
@ -380,7 +485,7 @@ class FDCapture:
os.dup2(targetfd_save, self.targetfd)
os.close(targetfd_save)
self.syscapture.done()
self.tmpfile.close()
_attempt_to_close_capture_file(self.tmpfile)
def suspend(self):
self.syscapture.suspend()
@ -392,12 +497,25 @@ class FDCapture:
def writeorg(self, data):
""" write to original file descriptor. """
if py.builtin._istext(data):
data = data.encode("utf8") # XXX use encoding of original stream
if isinstance(data, six.text_type):
data = data.encode("utf8") # XXX use encoding of original stream
os.write(self.targetfd_save, data)
class SysCapture:
class FDCapture(FDCaptureBinary):
"""Capture IO to/from a given os-level filedescriptor.
snap() produces text
"""
def snap(self):
res = FDCaptureBinary.snap(self)
enc = getattr(self.tmpfile, "encoding", None)
if enc and isinstance(res, bytes):
res = six.text_type(res, enc, "replace")
return res
class SysCapture(object):
def __init__(self, fd, tmpfile=None):
name = patchsysdict[fd]
self._old = getattr(sys, name)
@ -413,16 +531,15 @@ class SysCapture:
setattr(sys, self.name, self.tmpfile)
def snap(self):
f = self.tmpfile
res = f.getvalue()
f.truncate(0)
f.seek(0)
res = self.tmpfile.getvalue()
self.tmpfile.seek(0)
self.tmpfile.truncate()
return res
def done(self):
setattr(sys, self.name, self._old)
del self._old
self.tmpfile.close()
_attempt_to_close_capture_file(self.tmpfile)
def suspend(self):
setattr(sys, self.name, self._old)
@ -435,7 +552,15 @@ class SysCapture:
self._old.flush()
class DontReadFromInput:
class SysCaptureBinary(SysCapture):
def snap(self):
res = self.tmpfile.buffer.getvalue()
self.tmpfile.seek(0)
self.tmpfile.truncate()
return res
class DontReadFromInput(six.Iterator):
"""Temporary stub class. Ideally when stdin is accessed, the
capturing should be turned off, with possibly all data captured
so far sent to the screen. This should be configurable, though,
@ -449,7 +574,10 @@ class DontReadFromInput:
raise IOError("reading from stdin while output is captured")
readline = read
readlines = read
__iter__ = read
__next__ = read
def __iter__(self):
return self
def fileno(self):
raise UnsupportedOperation("redirected stdin is pseudofile, "
@ -463,12 +591,30 @@ class DontReadFromInput:
@property
def buffer(self):
if sys.version_info >= (3,0):
if sys.version_info >= (3, 0):
return self
else:
raise AttributeError('redirected stdin has no attribute buffer')
def _colorama_workaround():
"""
Ensure colorama is imported so that it attaches to the correct stdio
handles on Windows.
colorama uses the terminal at import time, so if something does the
first import of colorama while I/O capture is active, colorama will
fail in various ways.
"""
if not sys.platform.startswith('win32'):
return
try:
import colorama # noqa
except ImportError:
pass
def _readline_workaround():
"""
Ensure readline is imported so that it attaches to the correct stdio
@ -496,7 +642,7 @@ def _readline_workaround():
pass
def _py36_windowsconsoleio_workaround():
def _py36_windowsconsoleio_workaround(stream):
"""
Python 3.6 implemented unicode console handling for Windows. This works
by reading/writing to the raw console handle using
@ -513,13 +659,20 @@ def _py36_windowsconsoleio_workaround():
also means a different handle by replicating the logic in
"Py_lifecycle.c:initstdio/create_stdio".
:param stream: in practice ``sys.stdout`` or ``sys.stderr``, but given
here as a parameter for unit testing purposes.
See https://github.com/pytest-dev/py/issues/103
"""
if not sys.platform.startswith('win32') or sys.version_info[:2] < (3, 6):
return
buffered = hasattr(sys.stdout.buffer, 'raw')
raw_stdout = sys.stdout.buffer.raw if buffered else sys.stdout.buffer
# bail out if ``stream`` doesn't seem like a proper ``io`` stream (#2666)
if not hasattr(stream, 'buffer'):
return
buffered = hasattr(stream.buffer, 'raw')
raw_stdout = stream.buffer.raw if buffered else stream.buffer
if not isinstance(raw_stdout, io._WindowsConsoleIO):
return
@ -540,3 +693,14 @@ def _py36_windowsconsoleio_workaround():
sys.__stdin__ = sys.stdin = _reopen_stdio(sys.stdin, 'rb')
sys.__stdout__ = sys.stdout = _reopen_stdio(sys.stdout, 'wb')
sys.__stderr__ = sys.stderr = _reopen_stdio(sys.stderr, 'wb')
def _attempt_to_close_capture_file(f):
"""Suppress IOError when closing the temporary file used for capturing streams in py27 (#2370)"""
if six.PY2:
try:
f.close()
except IOError:
pass
else:
f.close()

View File

@ -2,17 +2,17 @@
python version compatibility code
"""
from __future__ import absolute_import, division, print_function
import sys
import inspect
import types
import re
import codecs
import functools
import inspect
import re
import sys
import py
import _pytest
import _pytest
from _pytest.outcomes import TEST_OUTCOME
try:
import enum
@ -25,6 +25,12 @@ _PY3 = sys.version_info > (3, 0)
_PY2 = not _PY3
if _PY3:
from inspect import signature, Parameter as Parameter
else:
from funcsigs import signature, Parameter as Parameter
NoneType = type(None)
NOTSET = object()
@ -32,12 +38,18 @@ PY35 = sys.version_info[:2] >= (3, 5)
PY36 = sys.version_info[:2] >= (3, 6)
MODULE_NOT_FOUND_ERROR = 'ModuleNotFoundError' if PY36 else 'ImportError'
if hasattr(inspect, 'signature'):
def _format_args(func):
return str(inspect.signature(func))
if _PY3:
from collections.abc import MutableMapping as MappingMixin # noqa
from collections.abc import Sequence # noqa
else:
def _format_args(func):
return inspect.formatargspec(*inspect.getargspec(func))
# those raise DeprecationWarnings in Python >=3.7
from collections import MutableMapping as MappingMixin # noqa
from collections import Sequence # noqa
def _format_args(func):
return str(signature(func))
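# Illustrative example, not part of the original diff: with funcsigs
# providing ``signature`` on Python 2, _format_args now behaves the same on
# both major versions, e.g.
#
#     _format_args(lambda a, b=1: None)  ->  "(a, b=1)"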
isfunction = inspect.isfunction
isclass = inspect.isclass
@ -59,16 +71,15 @@ def iscoroutinefunction(func):
which in turn also initializes the "logging" module as a side-effect (see issue #8).
"""
return (getattr(func, '_is_coroutine', False) or
(hasattr(inspect, 'iscoroutinefunction') and inspect.iscoroutinefunction(func)))
(hasattr(inspect, 'iscoroutinefunction') and inspect.iscoroutinefunction(func)))
def getlocation(function, curdir):
import inspect
fn = py.path.local(inspect.getfile(function))
lineno = py.builtin._getcode(function).co_firstlineno
if fn.relto(curdir):
fn = fn.relto(curdir)
return "%s:%d" %(fn, lineno+1)
return "%s:%d" % (fn, lineno + 1)
def num_mock_patch_args(function):
@ -76,59 +87,72 @@ def num_mock_patch_args(function):
patchings = getattr(function, "patchings", None)
if not patchings:
return 0
mock = sys.modules.get("mock", sys.modules.get("unittest.mock", None))
if mock is not None:
mock_modules = [sys.modules.get("mock"), sys.modules.get("unittest.mock")]
if any(mock_modules):
sentinels = [m.DEFAULT for m in mock_modules if m is not None]
return len([p for p in patchings
if not p.attribute_name and p.new is mock.DEFAULT])
if not p.attribute_name and p.new in sentinels])
return len(patchings)
def getfuncargnames(function, startindex=None):
# XXX merge with main.py's varnames
#assert not isclass(function)
realfunction = function
while hasattr(realfunction, "__wrapped__"):
realfunction = realfunction.__wrapped__
if startindex is None:
startindex = inspect.ismethod(function) and 1 or 0
if realfunction != function:
startindex += num_mock_patch_args(function)
function = realfunction
if isinstance(function, functools.partial):
argnames = inspect.getargs(_pytest._code.getrawcode(function.func))[0]
partial = function
argnames = argnames[len(partial.args):]
if partial.keywords:
for kw in partial.keywords:
argnames.remove(kw)
else:
argnames = inspect.getargs(_pytest._code.getrawcode(function))[0]
defaults = getattr(function, 'func_defaults',
getattr(function, '__defaults__', None)) or ()
numdefaults = len(defaults)
if numdefaults:
return tuple(argnames[startindex:-numdefaults])
return tuple(argnames[startindex:])
def getfuncargnames(function, is_method=False, cls=None):
"""Returns the names of a function's mandatory arguments.
This should return the names of all function arguments that:
* Aren't bound to an instance or type as in instance or class methods.
* Don't have default values.
* Aren't bound with functools.partial.
* Aren't replaced with mocks.
The is_method and cls arguments indicate that the function should
be treated as a bound method even though it's not, unless (in the
case of cls) the function turns out to be a static method.
if sys.version_info[:2] == (2, 6):
def isclass(object):
""" Return true if the object is a class. Overrides inspect.isclass for
python 2.6 because it will return True for objects which always return
something on __getattr__ calls (see #1035).
Backport of https://hg.python.org/cpython/rev/35bf8f7a8edc
"""
return isinstance(object, (type, types.ClassType))
@RonnyPfannschmidt: This function should be refactored when we
revisit fixtures. The fixture mechanism should ask the node for
the fixture names, and not try to obtain them directly from the
function object well after collection has occurred.
"""
# The parameters attribute of a Signature object contains an
# ordered mapping of parameter names to Parameter instances. This
# creates a tuple of the names of the parameters that don't have
# defaults.
arg_names = tuple(p.name for p in signature(function).parameters.values()
if (p.kind is Parameter.POSITIONAL_OR_KEYWORD or
p.kind is Parameter.KEYWORD_ONLY) and
p.default is Parameter.empty)
# If this function should be treated as a bound method even though
# it's passed as an unbound method or function, remove the first
# parameter name.
if (is_method or
(cls and not isinstance(cls.__dict__.get(function.__name__, None),
staticmethod))):
arg_names = arg_names[1:]
# Remove any names that will be replaced with mocks.
if hasattr(function, "__wrapped__"):
arg_names = arg_names[num_mock_patch_args(function):]
return arg_names
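# Illustrative example, not part of the original diff: for a plain function
# the signature-based implementation keeps only mandatory, unbound
# parameters, e.g.
#
#     def example(tmpdir, capsys, flag=False): ...
#     getfuncargnames(example)  ->  ("tmpdir", "capsys")
#
# "flag" is dropped because it has a default value.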
if _PY3:
import codecs
imap = map
STRING_TYPES = bytes, str
UNICODE_TYPES = str,
def _escape_strings(val):
if PY35:
def _bytes_to_ascii(val):
return val.decode('ascii', 'backslashreplace')
else:
def _bytes_to_ascii(val):
if val:
# source: http://goo.gl/bGsnwC
encoded_bytes, _ = codecs.escape_encode(val)
return encoded_bytes.decode('ascii')
else:
# empty bytes crashes codecs.escape_encode (#1087)
return ''
def ascii_escaped(val):
"""If val is pure ascii, returns it as a str(). Otherwise, escapes
bytes objects into a sequence of escaped bytes:
@ -147,22 +171,14 @@ if _PY3:
"""
if isinstance(val, bytes):
if val:
# source: http://goo.gl/bGsnwC
encoded_bytes, _ = codecs.escape_encode(val)
return encoded_bytes.decode('ascii')
else:
# empty bytes crashes codecs.escape_encode (#1087)
return ''
return _bytes_to_ascii(val)
else:
return val.encode('unicode_escape').decode('ascii')
else:
STRING_TYPES = bytes, str, unicode
UNICODE_TYPES = unicode,
from itertools import imap # NOQA
def _escape_strings(val):
def ascii_escaped(val):
"""In py2 bytes and str are the same type, so return if it's a bytes
object, return it unchanged if it is a full ascii string,
otherwise escape it into its binary form.
@ -215,21 +231,20 @@ def getimfunc(func):
try:
return func.__func__
except AttributeError:
try:
return func.im_func
except AttributeError:
return func
return func
def safe_getattr(object, name, default):
""" Like getattr but return default upon any Exception.
""" Like getattr but return default upon any Exception or any OutcomeException.
Attribute access can potentially fail for 'evil' Python objects.
See issue #214.
It catches OutcomeException because of #2490 (issue #580): new outcomes are derived from BaseException
instead of Exception (for more details check #2707)
"""
try:
return getattr(object, name, default)
except Exception:
except TEST_OUTCOME:
return default
@ -283,7 +298,15 @@ def _setup_collect_fakemodule():
if _PY2:
from py.io import TextIO as CaptureIO
# Without this, test_dupfile_on_textio will fail; otherwise CaptureIO could directly inherit from StringIO.
from py.io import TextIO
class CaptureIO(TextIO):
@property
def encoding(self):
return getattr(self, '_encoding', 'UTF-8')
else:
import io
@ -297,6 +320,7 @@ else:
def getvalue(self):
return self.buffer.getvalue().decode('UTF-8')
class FuncargnamesCompatAttr(object):
""" helper class so that Metafunc, Function and FixtureRequest
don't need to each define the "funcargnames" compatibility attribute.

View File

@ -5,15 +5,18 @@ import shlex
import traceback
import types
import warnings
import copy
import six
import py
# DON'T import pytest here because it causes import cycle troubles
import sys
import os
from _pytest.outcomes import Skipped
import _pytest._code
import _pytest.hookspec # the extension point definitions
import _pytest.assertion
from _pytest._pluggy import PluginManager, HookimplMarker, HookspecMarker
from pluggy import PluginManager, HookimplMarker, HookspecMarker
from _pytest.compat import safe_str
hookimpl = HookimplMarker("pytest")
@ -51,7 +54,7 @@ def main(args=None, plugins=None):
tw = py.io.TerminalWriter(sys.stderr)
for line in traceback.format_exception(*e.excinfo):
tw.line(line.rstrip(), red=True)
tw.line("ERROR: could not load %s\n" % (e.path), red=True)
tw.line("ERROR: could not load %s\n" % (e.path,), red=True)
return 4
else:
try:
@ -59,11 +62,13 @@ def main(args=None, plugins=None):
finally:
config._ensure_unconfigure()
except UsageError as e:
tw = py.io.TerminalWriter(sys.stderr)
for msg in e.args:
sys.stderr.write("ERROR: %s\n" %(msg,))
tw.line("ERROR: {}\n".format(msg), red=True)
return 4
class cmdline: # compatibility namespace
class cmdline(object): # NOQA compatibility namespace
main = staticmethod(main)
@ -99,26 +104,18 @@ def directory_arg(path, optname):
return path
_preinit = []
default_plugins = (
"mark main terminal runner python fixtures debugging unittest capture skipping "
"tmpdir monkeypatch recwarn pastebin helpconfig nose assertion "
"junitxml resultlog doctest cacheprovider freeze_support "
"setuponly setupplan warnings").split()
"mark main terminal runner python fixtures debugging unittest capture skipping "
"tmpdir monkeypatch recwarn pastebin helpconfig nose assertion "
"junitxml resultlog doctest cacheprovider freeze_support "
"setuponly setupplan warnings logging").split()
builtin_plugins = set(default_plugins)
builtin_plugins.add("pytester")
def _preloadplugins():
assert not _preinit
_preinit.append(get_config())
def get_config():
if _preinit:
return _preinit.pop(0)
# subsequent calls to main will create a fresh instance
pluginmanager = PytestPluginManager()
config = Config(pluginmanager)
@ -126,6 +123,7 @@ def get_config():
pluginmanager.import_plugin(spec)
return config
def get_plugin_manager():
"""
Obtain a new instance of the
@ -137,6 +135,7 @@ def get_plugin_manager():
"""
return get_config().pluginmanager
def _prepareconfig(args=None, plugins=None):
warning = None
if args is None:
@ -154,14 +153,14 @@ def _prepareconfig(args=None, plugins=None):
try:
if plugins:
for plugin in plugins:
if isinstance(plugin, py.builtin._basestring):
if isinstance(plugin, six.string_types):
pluginmanager.consider_pluginarg(plugin)
else:
pluginmanager.register(plugin)
if warning:
config.warn('C1', warning)
return pluginmanager.hook.pytest_cmdline_parse(
pluginmanager=pluginmanager, args=args)
pluginmanager=pluginmanager, args=args)
except BaseException:
config._ensure_unconfigure()
raise
@ -169,13 +168,14 @@ def _prepareconfig(args=None, plugins=None):
class PytestPluginManager(PluginManager):
"""
Overwrites :py:class:`pluggy.PluginManager <_pytest.vendored_packages.pluggy.PluginManager>` to add pytest-specific
Overwrites :py:class:`pluggy.PluginManager <pluggy.PluginManager>` to add pytest-specific
functionality:
* loading plugins from the command line, ``PYTEST_PLUGIN`` env variable and
* loading plugins from the command line, ``PYTEST_PLUGINS`` env variable and
``pytest_plugins`` global variables found in plugins being loaded;
* ``conftest.py`` loading during start-up;
"""
def __init__(self):
super(PytestPluginManager, self).__init__("pytest", implprefix="pytest_")
self._conftest_plugins = set()
@ -201,12 +201,15 @@ class PytestPluginManager(PluginManager):
# Config._consider_importhook will set a real object if required.
self.rewrite_hook = _pytest.assertion.DummyRewriteHook()
# Used to know when we are importing conftests after the pytest_configure stage
self._configured = False
def addhooks(self, module_or_class):
"""
.. deprecated:: 2.8
Use :py:meth:`pluggy.PluginManager.add_hookspecs <_pytest.vendored_packages.pluggy.PluginManager.add_hookspecs>` instead.
Use :py:meth:`pluggy.PluginManager.add_hookspecs <PluginManager.add_hookspecs>`
instead.
"""
warning = dict(code="I2",
fslocation=_pytest._code.getfslineno(sys._getframe(1)),
@ -235,7 +238,7 @@ class PytestPluginManager(PluginManager):
def parse_hookspec_opts(self, module_or_class, name):
opts = super(PytestPluginManager, self).parse_hookspec_opts(
module_or_class, name)
module_or_class, name)
if opts is None:
method = getattr(module_or_class, name)
if name.startswith("pytest_"):
@ -243,22 +246,16 @@ class PytestPluginManager(PluginManager):
"historic": hasattr(method, "historic")}
return opts
def _verify_hook(self, hook, hookmethod):
super(PytestPluginManager, self)._verify_hook(hook, hookmethod)
if "__multicall__" in hookmethod.argnames:
fslineno = _pytest._code.getfslineno(hookmethod.function)
warning = dict(code="I1",
fslocation=fslineno,
nodeid=None,
message="%r hook uses deprecated __multicall__ "
"argument" % (hook.name))
self._warn(warning)
def register(self, plugin, name=None):
if name in ['pytest_catchlog', 'pytest_capturelog']:
self._warn('{0} plugin has been merged into the core, '
'please remove it from your requirements.'.format(
name.replace('_', '-')))
return
ret = super(PytestPluginManager, self).register(plugin, name)
if ret:
self.hook.pytest_plugin_registered.call_historic(
kwargs=dict(plugin=plugin, manager=self))
kwargs=dict(plugin=plugin, manager=self))
if isinstance(plugin, types.ModuleType):
self.consider_module(plugin)
@ -276,11 +273,12 @@ class PytestPluginManager(PluginManager):
# XXX now that the pluginmanager exposes hookimpl(tryfirst...)
# we should remove tryfirst/trylast as markers
config.addinivalue_line("markers",
"tryfirst: mark a hook implementation function such that the "
"plugin machinery will try to call it first/as early as possible.")
"tryfirst: mark a hook implementation function such that the "
"plugin machinery will try to call it first/as early as possible.")
config.addinivalue_line("markers",
"trylast: mark a hook implementation function such that the "
"plugin machinery will try to call it last/as late as possible.")
"trylast: mark a hook implementation function such that the "
"plugin machinery will try to call it last/as late as possible.")
self._configured = True
def _warn(self, message):
kwargs = message if isinstance(message, dict) else {
@ -304,7 +302,7 @@ class PytestPluginManager(PluginManager):
"""
current = py.path.local()
self._confcutdir = current.join(namespace.confcutdir, abs=True) \
if namespace.confcutdir else None
if namespace.confcutdir else None
self._noconftest = namespace.noconftest
testpaths = namespace.file_or_dir
foundanchor = False
@ -315,7 +313,7 @@ class PytestPluginManager(PluginManager):
if i != -1:
path = path[:i]
anchor = current.join(path, abs=1)
if exists(anchor): # we found some file object
if exists(anchor): # we found some file object
self._try_load_conftest(anchor)
foundanchor = True
if not foundanchor:
@ -371,6 +369,9 @@ class PytestPluginManager(PluginManager):
_ensure_removed_sysmodule(conftestpath.purebasename)
try:
mod = conftestpath.pyimport()
if hasattr(mod, 'pytest_plugins') and self._configured:
from _pytest.deprecated import PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST
warnings.warn(PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST)
except Exception:
raise ConftestImportFailure(conftestpath, sys.exc_info())
@ -382,7 +383,7 @@ class PytestPluginManager(PluginManager):
if path and path.relto(dirpath) or path == dirpath:
assert mod not in mods
mods.append(mod)
self.trace("loaded conftestmodule %r" %(mod))
self.trace("loaded conftestmodule %r" % (mod))
self.consider_conftest(mod)
return mod
@ -392,7 +393,7 @@ class PytestPluginManager(PluginManager):
#
def consider_preparse(self, args):
for opt1,opt2 in zip(args, args[1:]):
for opt1, opt2 in zip(args, args[1:]):
if opt1 == "-p":
self.consider_pluginarg(opt2)
@ -424,9 +425,9 @@ class PytestPluginManager(PluginManager):
# "terminal" or "capture". Those plugins are registered under their
# basename for historic purposes but must be imported with the
# _pytest prefix.
assert isinstance(modname, (py.builtin.text, str)), "module name as text required, got %r" % modname
assert isinstance(modname, (six.text_type, str)), "module name as text required, got %r" % modname
modname = str(modname)
if self.get_plugin(modname) is not None:
if self.is_blocked(modname) or self.get_plugin(modname) is not None:
return
if modname in builtin_plugins:
importspec = "_pytest." + modname
@ -436,17 +437,14 @@ class PytestPluginManager(PluginManager):
try:
__import__(importspec)
except ImportError as e:
new_exc = ImportError('Error importing plugin "%s": %s' % (modname, safe_str(e.args[0])))
# copy over name and path attributes
for attr in ('name', 'path'):
if hasattr(e, attr):
setattr(new_exc, attr, getattr(e, attr))
raise new_exc
except Exception as e:
import pytest
if not hasattr(pytest, 'skip') or not isinstance(e, pytest.skip.Exception):
raise
self._warn("skipped plugin %r: %s" %((modname, e.msg)))
new_exc_type = ImportError
new_exc_message = 'Error importing plugin "%s": %s' % (modname, safe_str(e.args[0]))
new_exc = new_exc_type(new_exc_message)
six.reraise(new_exc_type, new_exc, sys.exc_info()[2])
except Skipped as e:
self._warn("skipped plugin %r: %s" % ((modname, e.msg)))
else:
mod = sys.modules[importspec]
self.register(mod, modname)
@ -470,7 +468,7 @@ def _get_plugin_specs_as_list(specs):
return []
class Parser:
class Parser(object):
""" Parser for command line arguments and ini-file values.
:ivar extra_info: dict of generic param -> value to display in case
@ -511,7 +509,7 @@ class Parser:
for i, grp in enumerate(self._groups):
if grp.name == after:
break
self._groups.insert(i+1, group)
self._groups.insert(i + 1, group)
return group
def addoption(self, *opts, **attrs):
@ -549,7 +547,7 @@ class Parser:
a = option.attrs()
arggroup.add_argument(*n, **a)
# bash like autocompletion for dirs (appending '/')
optparser.add_argument(FILE_OR_DIR, nargs='*').completer=filescompleter
optparser.add_argument(FILE_OR_DIR, nargs='*').completer = filescompleter
return optparser
def parse_setoption(self, args, option, namespace=None):
@ -605,7 +603,7 @@ class ArgumentError(Exception):
return self.msg
class Argument:
class Argument(object):
"""class that mimics the necessary behaviour of optparse.Option
it's currently a least-effort implementation
@ -637,7 +635,7 @@ class Argument:
pass
else:
# this might raise a keyerror as well, don't want to catch that
if isinstance(typ, py.builtin._basestring):
if isinstance(typ, six.string_types):
if typ == 'choice':
warnings.warn(
'type argument to addoption() is a string %r.'
@ -693,7 +691,7 @@ class Argument:
if self._attrs.get('help'):
a = self._attrs['help']
a = a.replace('%default', '%(default)s')
#a = a.replace('%prog', '%(prog)s')
# a = a.replace('%prog', '%(prog)s')
self._attrs['help'] = a
return self._attrs
@ -735,7 +733,7 @@ class Argument:
return 'Argument({0})'.format(', '.join(args))
class OptionGroup:
class OptionGroup(object):
def __init__(self, name, description="", parser=None):
self.name = name
self.description = description
@ -777,7 +775,7 @@ class MyOptionParser(argparse.ArgumentParser):
extra_info = {}
self._parser = parser
argparse.ArgumentParser.__init__(self, usage=parser._usage,
add_help=False, formatter_class=DropShorterLongHelpFormatter)
add_help=False, formatter_class=DropShorterLongHelpFormatter)
# extra_info is a dict of (param -> value) to display if there's
# a usage error, to provide more contextual information to the user
self.extra_info = extra_info
@ -805,9 +803,10 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
- shortcut if there are only two options and one of them is a short one
- cache result on action object as this is called at least 2 times
"""
def _format_action_invocation(self, action):
orgstr = argparse.HelpFormatter._format_action_invocation(self, action)
if orgstr and orgstr[0] != '-': # only optional arguments
if orgstr and orgstr[0] != '-': # only optional arguments
return orgstr
res = getattr(action, '_formatted_action_invocation', None)
if res:
@ -818,7 +817,7 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
action._formatted_action_invocation = orgstr
return orgstr
return_list = []
option_map = getattr(action, 'map_long_option', {})
option_map = getattr(action, 'map_long_option', {})
if option_map is None:
option_map = {}
short_long = {}
@ -836,7 +835,7 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
short_long[shortened] = xxoption
# now short_long has been filled out to the longest with dashes
# **and** we keep the right option ordering from add_argument
for option in options: #
for option in options:
if len(option) == 2 or option[2] == ' ':
return_list.append(option)
if option[2:] == short_long.get(option.replace('-', '')):
@ -845,23 +844,14 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
return action._formatted_action_invocation
def _ensure_removed_sysmodule(modname):
try:
del sys.modules[modname]
except KeyError:
pass
class CmdOptions(object):
""" holds cmdline options as attributes."""
def __init__(self, values=()):
self.__dict__.update(values)
def __repr__(self):
return "<CmdOptions %r>" %(self.__dict__,)
def copy(self):
return CmdOptions(self.__dict__)
class Notset:
class Notset(object):
def __repr__(self):
return "<NOTSET>"
@ -870,13 +860,25 @@ notset = Notset()
FILE_OR_DIR = 'file_or_dir'
def _iter_rewritable_modules(package_files):
for fn in package_files:
is_simple_module = '/' not in fn and fn.endswith('.py')
is_package = fn.count('/') == 1 and fn.endswith('__init__.py')
if is_simple_module:
module_name, _ = os.path.splitext(fn)
yield module_name
elif is_package:
package_name = os.path.dirname(fn)
yield package_name
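# Illustrative example, not part of the original diff: given the files
# recorded for an installed plugin distribution, the helper yields the
# importable names that should be marked for assertion rewriting:
#
#     list(_iter_rewritable_modules(["pytest_foo.py", "pytest_bar/__init__.py"]))
#     ->  ["pytest_foo", "pytest_bar"]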
class Config(object):
""" access to configuration values, pluginmanager and plugin hooks. """
def __init__(self, pluginmanager):
#: access to command line option as attributes.
#: (deprecated), use :py:func:`getoption() <_pytest.config.Config.getoption>` instead
self.option = CmdOptions()
self.option = argparse.Namespace()
_a = FILE_OR_DIR
self._parser = Parser(
usage="%%(prog)s [options] [%s] [%s] [...]" % (_a, _a),
@ -940,14 +942,14 @@ class Config(object):
else:
style = "native"
excrepr = excinfo.getrepr(funcargs=True,
showlocals=getattr(option, 'showlocals', False),
style=style,
)
showlocals=getattr(option, 'showlocals', False),
style=style,
)
res = self.hook.pytest_internalerror(excrepr=excrepr,
excinfo=excinfo)
if not py.builtin.any(res):
if not any(res):
for line in str(excrepr).split("\n"):
sys.stderr.write("INTERNALERROR> %s\n" %line)
sys.stderr.write("INTERNALERROR> %s\n" % line)
sys.stderr.flush()
def cwd_relative_nodeid(self, nodeid):
@ -980,8 +982,9 @@ class Config(object):
self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)
def _initini(self, args):
ns, unknown_args = self._parser.parse_known_and_unknown_args(args, namespace=self.option.copy())
r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args, warnfunc=self.warn)
ns, unknown_args = self._parser.parse_known_and_unknown_args(args, namespace=copy.copy(self.option))
r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args, warnfunc=self.warn,
rootdir_cmd_arg=ns.rootdir or None)
self.rootdir, self.inifile, self.inicfg = r
self._parser.extra_info['rootdir'] = self.rootdir
self._parser.extra_info['inifile'] = self.inifile
@ -991,10 +994,10 @@ class Config(object):
self._override_ini = ns.override_ini or ()
def _consider_importhook(self, args):
"""Install the PEP 302 import hook if using assertion re-writing.
"""Install the PEP 302 import hook if using assertion rewriting.
Needs to parse the --assert=<mode> option from the commandline
and find all the installed plugins to mark them for re-writing
and find all the installed plugins to mark them for rewriting
by the importhook.
"""
ns, unknown_args = self._parser.parse_known_and_unknown_args(args)
@ -1006,7 +1009,7 @@ class Config(object):
mode = 'plain'
else:
self._mark_plugins_for_rewrite(hook)
self._warn_about_missing_assertion(mode)
_warn_about_missing_assertion(mode)
def _mark_plugins_for_rewrite(self, hook):
"""
@ -1030,51 +1033,28 @@ class Config(object):
for entry in entrypoint.dist._get_metadata(metadata)
)
for fn in package_files:
is_simple_module = os.sep not in fn and fn.endswith('.py')
is_package = fn.count(os.sep) == 1 and fn.endswith('__init__.py')
if is_simple_module:
module_name, ext = os.path.splitext(fn)
hook.mark_rewrite(module_name)
elif is_package:
package_name = os.path.dirname(fn)
hook.mark_rewrite(package_name)
def _warn_about_missing_assertion(self, mode):
try:
assert False
except AssertionError:
pass
else:
if mode == 'plain':
sys.stderr.write("WARNING: ASSERTIONS ARE NOT EXECUTED"
" and FAILING TESTS WILL PASS. Are you"
" using python -O?")
else:
sys.stderr.write("WARNING: assertions not in test modules or"
" plugins will be ignored"
" because assert statements are not executed "
"by the underlying Python interpreter "
"(are you using python -O?)\n")
for name in _iter_rewritable_modules(package_files):
hook.mark_rewrite(name)
def _preparse(self, args, addopts=True):
self._initini(args)
if addopts:
args[:] = shlex.split(os.environ.get('PYTEST_ADDOPTS', '')) + args
self._initini(args)
if addopts:
args[:] = self.getini("addopts") + args
self._checkversion()
self._consider_importhook(args)
self.pluginmanager.consider_preparse(args)
self.pluginmanager.load_setuptools_entrypoints('pytest11')
self.pluginmanager.consider_env()
self.known_args_namespace = ns = self._parser.parse_known_args(args, namespace=self.option.copy())
confcutdir = self.known_args_namespace.confcutdir
self.known_args_namespace = ns = self._parser.parse_known_args(
args, namespace=copy.copy(self.option))
if self.known_args_namespace.confcutdir is None and self.inifile:
confcutdir = py.path.local(self.inifile).dirname
self.known_args_namespace.confcutdir = confcutdir
try:
self.hook.pytest_load_initial_conftests(early_config=self,
args=args, parser=self._parser)
args=args, parser=self._parser)
except ConftestImportFailure:
e = sys.exc_info()[1]
if ns.help or ns.version:
@ -1092,17 +1072,17 @@ class Config(object):
myver = pytest.__version__.split(".")
if myver < ver:
raise pytest.UsageError(
"%s:%d: requires pytest-%s, actual pytest-%s'" %(
self.inicfg.config.path, self.inicfg.lineof('minversion'),
minver, pytest.__version__))
"%s:%d: requires pytest-%s, actual pytest-%s'" % (
self.inicfg.config.path, self.inicfg.lineof('minversion'),
minver, pytest.__version__))
def parse(self, args, addopts=True):
# parse given cmdline arguments into this config object.
assert not hasattr(self, 'args'), (
"can only parse cmdline args at most once per Config object")
"can only parse cmdline args at most once per Config object")
self._origargs = args
self.hook.pytest_addhooks.call_historic(
kwargs=dict(pluginmanager=self.pluginmanager))
kwargs=dict(pluginmanager=self.pluginmanager))
self._preparse(args, addopts=addopts)
# XXX deprecated hook:
self.hook.pytest_cmdline_preparse(config=self, args=args)
@ -1125,7 +1105,7 @@ class Config(object):
the first line in its value. """
x = self.getini(name)
assert isinstance(x, list)
x.append(line) # modifies the cached list inline
x.append(line) # modifies the cached list inline
def getini(self, name):
""" return configuration value from an :ref:`ini file <inifiles>`. If the
@ -1142,7 +1122,7 @@ class Config(object):
try:
description, type, default = self._parser._inidict[name]
except KeyError:
raise ValueError("unknown configuration value: %r" %(name,))
raise ValueError("unknown configuration value: %r" % (name,))
value = self._get_override_ini_value(name)
if value is None:
try:
@ -1155,10 +1135,10 @@ class Config(object):
return []
if type == "pathlist":
dp = py.path.local(self.inicfg.config.path).dirpath()
l = []
values = []
for relpath in shlex.split(value):
l.append(dp.join(relpath, abs=True))
return l
values.append(dp.join(relpath, abs=True))
return values
elif type == "args":
return shlex.split(value)
elif type == "linelist":
@ -1175,26 +1155,25 @@ class Config(object):
except KeyError:
return None
modpath = py.path.local(mod.__file__).dirpath()
l = []
values = []
for relroot in relroots:
if not isinstance(relroot, py.path.local):
relroot = relroot.replace("/", py.path.local.sep)
relroot = modpath.join(relroot, abs=True)
l.append(relroot)
return l
values.append(relroot)
return values
def _get_override_ini_value(self, name):
value = None
# override_ini is a list of list, to support both -o foo1=bar1 foo2=bar2 and
# and -o foo1=bar1 -o foo2=bar2 options
# always use the last item if multiple value set for same ini-name,
# override_ini is a list of "ini=value" options
# always use the last item if multiple values are set for same ini-name,
# e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2
for ini_config_list in self._override_ini:
for ini_config in ini_config_list:
try:
(key, user_ini_value) = ini_config.split("=", 1)
except ValueError:
raise UsageError("-o/--override-ini expects option=value style.")
for ini_config in self._override_ini:
try:
key, user_ini_value = ini_config.split("=", 1)
except ValueError:
raise UsageError("-o/--override-ini expects option=value style.")
else:
if key == name:
value = user_ini_value
return value
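
A minimal stand-alone sketch of the "last value wins" rule described in the comment above; the helper name is hypothetical and not the actual Config method:

def last_override(override_ini, name):
    # override_ini is a list of "ini=value" strings, e.g. from repeated -o flags
    value = None
    for entry in override_ini:
        key, _, val = entry.partition("=")
        if key == name:
            value = val  # later entries overwrite earlier ones
    return value

# -o foo=bar1 -o foo=bar2  ->  last_override(["foo=bar1", "foo=bar2"], "foo") == "bar2"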
@ -1219,7 +1198,7 @@ class Config(object):
return default
if skip:
import pytest
pytest.skip("no %r option found" %(name,))
pytest.skip("no %r option found" % (name,))
raise ValueError("no option named %r" % (name,))
def getvalue(self, name, path=None):
@ -1230,12 +1209,37 @@ class Config(object):
""" (deprecated, use getoption(skip=True)) """
return self.getoption(name, skip=True)
def _assertion_supported():
try:
assert False
except AssertionError:
return True
else:
return False
def _warn_about_missing_assertion(mode):
if not _assertion_supported():
if mode == 'plain':
sys.stderr.write("WARNING: ASSERTIONS ARE NOT EXECUTED"
" and FAILING TESTS WILL PASS. Are you"
" using python -O?")
else:
sys.stderr.write("WARNING: assertions not in test modules or"
" plugins will be ignored"
" because assert statements are not executed "
"by the underlying Python interpreter "
"(are you using python -O?)\n")
def exists(path, ignore=EnvironmentError):
try:
return path.check()
except ignore:
return False
def getcfg(args, warnfunc=None):
"""
Search the list of arguments for a valid ini-file for pytest,
@ -1246,7 +1250,7 @@ def getcfg(args, warnfunc=None):
This parameter should be removed when pytest
adopts standard deprecation warnings (#1804).
"""
from _pytest.deprecated import SETUP_CFG_PYTEST
from _pytest.deprecated import CFG_PYTEST_SECTION
inibasenames = ["pytest.ini", "tox.ini", "setup.cfg"]
args = [x for x in args if not str(x).startswith("-")]
if not args:
@ -1260,7 +1264,7 @@ def getcfg(args, warnfunc=None):
iniconfig = py.iniconfig.IniConfig(p)
if 'pytest' in iniconfig.sections:
if inibasename == 'setup.cfg' and warnfunc:
warnfunc('C1', SETUP_CFG_PYTEST)
warnfunc('C1', CFG_PYTEST_SECTION.format(filename=inibasename))
return base, p, iniconfig['pytest']
if inibasename == 'setup.cfg' and 'tool:pytest' in iniconfig.sections:
return base, p, iniconfig['tool:pytest']
@ -1319,14 +1323,22 @@ def get_dirs_from_args(args):
]
def determine_setup(inifile, args, warnfunc=None):
def determine_setup(inifile, args, warnfunc=None, rootdir_cmd_arg=None):
dirs = get_dirs_from_args(args)
if inifile:
iniconfig = py.iniconfig.IniConfig(inifile)
try:
inicfg = iniconfig["pytest"]
except KeyError:
inicfg = None
is_cfg_file = str(inifile).endswith('.cfg')
        # TODO: [pytest] section in *.cfg files is deprecated. Need refactoring.
sections = ['tool:pytest', 'pytest'] if is_cfg_file else ['pytest']
for section in sections:
try:
inicfg = iniconfig[section]
if is_cfg_file and section == 'pytest' and warnfunc:
from _pytest.deprecated import CFG_PYTEST_SECTION
warnfunc('C1', CFG_PYTEST_SECTION.format(filename=str(inifile)))
break
except KeyError:
inicfg = None
rootdir = get_common_ancestor(dirs)
else:
ancestor = get_common_ancestor(dirs)
@ -1339,9 +1351,14 @@ def determine_setup(inifile, args, warnfunc=None):
rootdir, inifile, inicfg = getcfg(dirs, warnfunc=warnfunc)
if rootdir is None:
rootdir = get_common_ancestor([py.path.local(), ancestor])
is_fs_root = os.path.splitdrive(str(rootdir))[1] == os.sep
is_fs_root = os.path.splitdrive(str(rootdir))[1] == '/'
if is_fs_root:
rootdir = ancestor
if rootdir_cmd_arg:
rootdir_abs_path = py.path.local(os.path.expandvars(rootdir_cmd_arg))
if not os.path.isdir(str(rootdir_abs_path)):
raise UsageError("Directory '{}' not found. Check your '--rootdir' option.".format(rootdir_abs_path))
rootdir = rootdir_abs_path
return rootdir, inifile, inicfg or {}
@ -1361,7 +1378,7 @@ def setns(obj, dic):
else:
setattr(obj, name, value)
obj.__all__.append(name)
#if obj != pytest:
# if obj != pytest:
# pytest.__all__.append(name)
setattr(pytest, name, value)

View File

@ -2,7 +2,14 @@
from __future__ import absolute_import, division, print_function
import pdb
import sys
import os
from doctest import UnexpectedException
try:
from builtins import breakpoint # noqa
SUPPORTS_BREAKPOINT_BUILTIN = True
except ImportError:
SUPPORTS_BREAKPOINT_BUILTIN = False
def pytest_addoption(parser):
@ -27,12 +34,20 @@ def pytest_configure(config):
if config.getvalue("usepdb"):
config.pluginmanager.register(PdbInvoke(), 'pdbinvoke')
# Use custom Pdb class set_trace instead of default Pdb on breakpoint() call
if SUPPORTS_BREAKPOINT_BUILTIN:
_environ_pythonbreakpoint = os.environ.get('PYTHONBREAKPOINT', '')
if _environ_pythonbreakpoint == '':
sys.breakpointhook = pytestPDB.set_trace
old = (pdb.set_trace, pytestPDB._pluginmanager)
def fin():
pdb.set_trace, pytestPDB._pluginmanager = old
pytestPDB._config = None
pytestPDB._pdb_cls = pdb.Pdb
if SUPPORTS_BREAKPOINT_BUILTIN:
sys.breakpointhook = sys.__breakpointhook__
pdb.set_trace = pytestPDB.set_trace
pytestPDB._pluginmanager = config.pluginmanager
@ -40,7 +55,8 @@ def pytest_configure(config):
pytestPDB._pdb_cls = pdb_cls
config._cleanup.append(fin)
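
For context, a small sketch (assuming Python 3.7+) of the PEP 553 mechanism the code above hooks into: replacing ``sys.breakpointhook`` changes what the ``breakpoint()`` builtin invokes, and assigning ``sys.__breakpointhook__`` back restores the default:

import sys

def my_hook(*args, **kwargs):
    print("breakpoint() was intercepted")

sys.breakpointhook = my_hook
breakpoint()                                  # prints instead of entering pdb
sys.breakpointhook = sys.__breakpointhook__   # restore the default behaviour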
class pytestPDB:
class pytestPDB(object):
""" Pseudo PDB that defers to the real pdb. """
_pluginmanager = None
_config = None
@ -54,7 +70,7 @@ class pytestPDB:
if cls._pluginmanager is not None:
capman = cls._pluginmanager.getplugin("capturemanager")
if capman:
capman.suspendcapture(in_=True)
capman.suspend_global_capture(in_=True)
tw = _pytest.config.create_terminal_writer(cls._config)
tw.line()
tw.sep(">", "PDB set_trace (IO-capturing turned off)")
@ -62,11 +78,11 @@ class pytestPDB:
cls._pdb_cls().set_trace(frame)
class PdbInvoke:
class PdbInvoke(object):
def pytest_exception_interact(self, node, call, report):
capman = node.config.pluginmanager.getplugin("capturemanager")
if capman:
out, err = capman.suspendcapture(in_=True)
out, err = capman.suspend_global_capture(in_=True)
sys.stdout.write(out)
sys.stdout.write(err)
_enter_pdb(node, call.excinfo, report)
@ -85,6 +101,18 @@ def _enter_pdb(node, excinfo, rep):
# for not completely clear reasons.
tw = node.config.pluginmanager.getplugin("terminalreporter")._tw
tw.line()
showcapture = node.config.option.showcapture
for sectionname, content in (('stdout', rep.capstdout),
('stderr', rep.capstderr),
('log', rep.caplog)):
if showcapture in (sectionname, 'all') and content:
tw.sep(">", "captured " + sectionname)
if content[-1:] == "\n":
content = content[:-1]
tw.line(content)
tw.sep(">", "traceback")
rep.toterminal(tw)
tw.sep(">", "entering PDB")
@ -95,10 +123,9 @@ def _enter_pdb(node, excinfo, rep):
def _postmortem_traceback(excinfo):
# A doctest.UnexpectedException is not useful for post_mortem.
# Use the underlying exception instead:
from doctest import UnexpectedException
if isinstance(excinfo.value, UnexpectedException):
# A doctest.UnexpectedException is not useful for post_mortem.
# Use the underlying exception instead:
return excinfo.value.exc_info[2]
else:
return excinfo._excinfo[2]

View File

@ -13,7 +13,7 @@ class RemovedInPytest4Warning(DeprecationWarning):
MAIN_STR_ARGS = 'passing a string to pytest.main() is deprecated, ' \
'pass a list of arguments instead.'
'pass a list of arguments instead.'
YIELD_TESTS = 'yield tests are deprecated, and scheduled to be removed in pytest 4.0'
@ -22,18 +22,44 @@ FUNCARG_PREFIX = (
'and scheduled to be removed in pytest 4.0. '
'Please remove the prefix and use the @pytest.fixture decorator instead.')
SETUP_CFG_PYTEST = '[pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.'
CFG_PYTEST_SECTION = '[pytest] section in {filename} files is deprecated, use [tool:pytest] instead.'
GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue"
RESULT_LOG = '--result-log is deprecated and scheduled for removal in pytest 4.0'
RESULT_LOG = (
'--result-log is deprecated and scheduled for removal in pytest 4.0.\n'
'See https://docs.pytest.org/en/latest/usage.html#creating-resultlog-format-files for more information.'
)
MARK_INFO_ATTRIBUTE = RemovedInPytest4Warning(
"MarkInfo objects are deprecated as they contain the merged marks"
"MarkInfo objects are deprecated as they contain the merged marks.\n"
"Please use node.iter_markers to iterate over markers correctly"
)
MARK_PARAMETERSET_UNPACKING = RemovedInPytest4Warning(
"Applying marks directly to parameters is deprecated,"
" please use pytest.param(..., marks=...) instead.\n"
"For more details, see: https://docs.pytest.org/en/latest/parametrize.html"
)
)
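
A short usage sketch of the replacement API the message above points to, attaching a mark to a single parametrized case (the test and reason are illustrative):

import pytest

@pytest.mark.parametrize("n", [
    1,
    pytest.param(2, marks=pytest.mark.xfail(reason="known issue")),
])
def test_double(n):
    assert n * 2 == 2 * n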
RECORD_XML_PROPERTY = (
'Fixture renamed from "record_xml_property" to "record_property" as user '
'properties are now available to all reporters.\n'
'"record_xml_property" is now deprecated.'
)
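
A usage sketch of the renamed fixture referred to above; the test name and property values are illustrative:

def test_function(record_property):
    record_property("example_key", 1)
    assert True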
COLLECTOR_MAKEITEM = RemovedInPytest4Warning(
"pycollector makeitem was removed "
"as it is an accidentially leaked internal api"
)
METAFUNC_ADD_CALL = (
"Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.\n"
"Please use Metafunc.parametrize instead."
)
PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST = RemovedInPytest4Warning(
"Defining pytest_plugins in a non-top-level conftest is deprecated, "
"because it affects the entire directory tree in a non-explicit way.\n"
"Please move it to the top level conftest file instead."
)

View File

@ -2,6 +2,8 @@
from __future__ import absolute_import, division, print_function
import traceback
import sys
import platform
import pytest
from _pytest._code.code import ExceptionInfo, ReprFileLocation, TerminalRepr
@ -22,39 +24,54 @@ DOCTEST_REPORT_CHOICES = (
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
)
# Lazy definition of runner class
RUNNER_CLASS = None
def pytest_addoption(parser):
parser.addini('doctest_optionflags', 'option flags for doctests',
type="args", default=["ELLIPSIS"])
type="args", default=["ELLIPSIS"])
parser.addini("doctest_encoding", 'encoding used for doctest files', default="utf-8")
group = parser.getgroup("collect")
group.addoption("--doctest-modules",
action="store_true", default=False,
help="run doctests in all .py modules",
dest="doctestmodules")
action="store_true", default=False,
help="run doctests in all .py modules",
dest="doctestmodules")
group.addoption("--doctest-report",
type=str.lower, default="udiff",
help="choose another output format for diffs on doctest failure",
choices=DOCTEST_REPORT_CHOICES,
dest="doctestreport")
type=str.lower, default="udiff",
help="choose another output format for diffs on doctest failure",
choices=DOCTEST_REPORT_CHOICES,
dest="doctestreport")
group.addoption("--doctest-glob",
action="append", default=[], metavar="pat",
help="doctests file matching pattern, default: test*.txt",
dest="doctestglob")
action="append", default=[], metavar="pat",
help="doctests file matching pattern, default: test*.txt",
dest="doctestglob")
group.addoption("--doctest-ignore-import-errors",
action="store_true", default=False,
help="ignore doctest ImportErrors",
dest="doctest_ignore_import_errors")
action="store_true", default=False,
help="ignore doctest ImportErrors",
dest="doctest_ignore_import_errors")
group.addoption("--doctest-continue-on-failure",
action="store_true", default=False,
help="for a given doctest, continue to run after the first failure",
dest="doctest_continue_on_failure")
def pytest_collect_file(path, parent):
config = parent.config
if path.ext == ".py":
if config.option.doctestmodules:
if config.option.doctestmodules and not _is_setup_py(config, path, parent):
return DoctestModule(path, parent)
elif _is_doctest(config, path, parent):
return DoctestTextfile(path, parent)
def _is_setup_py(config, path, parent):
if path.basename != "setup.py":
return False
contents = path.read()
return 'setuptools' in contents or 'distutils' in contents
def _is_doctest(config, path, parent):
if path.ext in ('.txt', '.rst') and parent.session.isinitpath(path):
return True
@ -67,14 +84,63 @@ def _is_doctest(config, path, parent):
class ReprFailDoctest(TerminalRepr):
def __init__(self, reprlocation, lines):
self.reprlocation = reprlocation
self.lines = lines
def __init__(self, reprlocation_lines):
# List of (reprlocation, lines) tuples
self.reprlocation_lines = reprlocation_lines
def toterminal(self, tw):
for line in self.lines:
tw.line(line)
self.reprlocation.toterminal(tw)
for reprlocation, lines in self.reprlocation_lines:
for line in lines:
tw.line(line)
reprlocation.toterminal(tw)
class MultipleDoctestFailures(Exception):
def __init__(self, failures):
super(MultipleDoctestFailures, self).__init__()
self.failures = failures
def _init_runner_class():
import doctest
class PytestDoctestRunner(doctest.DebugRunner):
"""
Runner to collect failures. Note that the out variable in this case is
a list instead of a stdout-like object
"""
def __init__(self, checker=None, verbose=None, optionflags=0,
continue_on_failure=True):
doctest.DebugRunner.__init__(
self, checker=checker, verbose=verbose, optionflags=optionflags)
self.continue_on_failure = continue_on_failure
def report_failure(self, out, test, example, got):
failure = doctest.DocTestFailure(test, example, got)
if self.continue_on_failure:
out.append(failure)
else:
raise failure
def report_unexpected_exception(self, out, test, example, exc_info):
failure = doctest.UnexpectedException(test, example, exc_info)
if self.continue_on_failure:
out.append(failure)
else:
raise failure
return PytestDoctestRunner
def _get_runner(checker=None, verbose=None, optionflags=0,
continue_on_failure=True):
# We need this in order to do a lazy import on doctest
global RUNNER_CLASS
if RUNNER_CLASS is None:
RUNNER_CLASS = _init_runner_class()
return RUNNER_CLASS(
checker=checker, verbose=verbose, optionflags=optionflags,
continue_on_failure=continue_on_failure)
class DoctestItem(pytest.Item):
@ -95,51 +161,76 @@ class DoctestItem(pytest.Item):
def runtest(self):
_check_all_skipped(self.dtest)
self.runner.run(self.dtest)
self._disable_output_capturing_for_darwin()
failures = []
self.runner.run(self.dtest, out=failures)
if failures:
raise MultipleDoctestFailures(failures)
def _disable_output_capturing_for_darwin(self):
"""
Disable output capturing. Otherwise, stdout is lost to doctest (#985)
"""
if platform.system() != 'Darwin':
return
capman = self.config.pluginmanager.getplugin("capturemanager")
if capman:
out, err = capman.suspend_global_capture(in_=True)
sys.stdout.write(out)
sys.stderr.write(err)
def repr_failure(self, excinfo):
import doctest
failures = None
if excinfo.errisinstance((doctest.DocTestFailure,
doctest.UnexpectedException)):
doctestfailure = excinfo.value
example = doctestfailure.example
test = doctestfailure.test
filename = test.filename
if test.lineno is None:
lineno = None
else:
lineno = test.lineno + example.lineno + 1
message = excinfo.type.__name__
reprlocation = ReprFileLocation(filename, lineno, message)
checker = _get_checker()
report_choice = _get_report_choice(self.config.getoption("doctestreport"))
if lineno is not None:
lines = doctestfailure.test.docstring.splitlines(False)
# add line numbers to the left of the error message
lines = ["%03d %s" % (i + test.lineno + 1, x)
for (i, x) in enumerate(lines)]
# trim docstring error lines to 10
lines = lines[example.lineno - 9:example.lineno + 1]
else:
lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
indent = '>>>'
for line in example.source.splitlines():
lines.append('??? %s %s' % (indent, line))
indent = '...'
if excinfo.errisinstance(doctest.DocTestFailure):
lines += checker.output_difference(example,
doctestfailure.got, report_choice).split("\n")
else:
inner_excinfo = ExceptionInfo(excinfo.value.exc_info)
lines += ["UNEXPECTED EXCEPTION: %s" %
repr(inner_excinfo.value)]
lines += traceback.format_exception(*excinfo.value.exc_info)
return ReprFailDoctest(reprlocation, lines)
failures = [excinfo.value]
elif excinfo.errisinstance(MultipleDoctestFailures):
failures = excinfo.value.failures
if failures is not None:
reprlocation_lines = []
for failure in failures:
example = failure.example
test = failure.test
filename = test.filename
if test.lineno is None:
lineno = None
else:
lineno = test.lineno + example.lineno + 1
message = type(failure).__name__
reprlocation = ReprFileLocation(filename, lineno, message)
checker = _get_checker()
report_choice = _get_report_choice(self.config.getoption("doctestreport"))
if lineno is not None:
lines = failure.test.docstring.splitlines(False)
# add line numbers to the left of the error message
lines = ["%03d %s" % (i + test.lineno + 1, x)
for (i, x) in enumerate(lines)]
# trim docstring error lines to 10
lines = lines[max(example.lineno - 9, 0):example.lineno + 1]
else:
lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
indent = '>>>'
for line in example.source.splitlines():
lines.append('??? %s %s' % (indent, line))
indent = '...'
if isinstance(failure, doctest.DocTestFailure):
lines += checker.output_difference(example,
failure.got,
report_choice).split("\n")
else:
inner_excinfo = ExceptionInfo(failure.exc_info)
lines += ["UNEXPECTED EXCEPTION: %s" %
repr(inner_excinfo.value)]
lines += traceback.format_exception(*failure.exc_info)
reprlocation_lines.append((reprlocation, lines))
return ReprFailDoctest(reprlocation_lines)
else:
return super(DoctestItem, self).repr_failure(excinfo)
def reportinfo(self):
return self.fspath, None, "[doctest] %s" % self.name
return self.fspath, self.dtest.lineno, "[doctest] %s" % self.name
def _get_flag_lookup():
@ -163,6 +254,17 @@ def get_optionflags(parent):
flag_acc |= flag_lookup_table[flag]
return flag_acc
def _get_continue_on_failure(config):
continue_on_failure = config.getvalue('doctest_continue_on_failure')
if continue_on_failure:
# We need to turn off this if we use pdb since we should stop at
# the first failure
if config.getvalue("usepdb"):
continue_on_failure = False
return continue_on_failure
class DoctestTextfile(pytest.Module):
obj = None
@ -178,8 +280,11 @@ class DoctestTextfile(pytest.Module):
globs = {'__name__': '__main__'}
optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
checker=_get_checker())
runner = _get_runner(
verbose=0, optionflags=optionflags,
checker=_get_checker(),
continue_on_failure=_get_continue_on_failure(self.config))
_fix_spoof_python2(runner, encoding)
parser = doctest.DocTestParser()
@ -214,8 +319,10 @@ class DoctestModule(pytest.Module):
# uses internal doctest module parsing mechanism
finder = doctest.DocTestFinder()
optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
checker=_get_checker())
runner = _get_runner(
verbose=0, optionflags=optionflags,
checker=_get_checker(),
continue_on_failure=_get_continue_on_failure(self.config))
for test in finder.find(module, module.__name__):
if test.examples: # skip empty doctests
@ -332,7 +439,7 @@ def _fix_spoof_python2(runner, encoding):
should patch only doctests for text files because they don't have a way to declare their
encoding. Doctests in docstrings from Python modules don't have the same problem given that
Python already decoded the strings.
    This fixes the problem reported in issue #2434.
"""
from _pytest.compat import _PY2
@ -355,6 +462,6 @@ def _fix_spoof_python2(runner, encoding):
@pytest.fixture(scope='session')
def doctest_namespace():
"""
Inject names into the doctest namespace.
Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
"""
return dict()
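
A typical conftest.py sketch using this fixture (the numpy import is only illustrative): names placed into the returned dict become available inside collected doctests.

import numpy
import pytest

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy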

View File

@ -1,13 +1,18 @@
from __future__ import absolute_import, division, print_function
import sys
import functools
import inspect
import sys
import warnings
from collections import OrderedDict, deque, defaultdict
from more_itertools import flatten
import attr
import py
from py._code.code import FormattedExcinfo
import py
import warnings
import inspect
import _pytest
from _pytest import nodes
from _pytest._code.code import TerminalRepr
from _pytest.compat import (
NOTSET, exc_clear, _format_args,
@ -15,16 +20,26 @@ from _pytest.compat import (
is_generator, isclass, getimfunc,
getlocation, getfuncargnames,
safe_getattr,
FuncargnamesCompatAttr,
)
from _pytest.runner import fail
from _pytest.compat import FuncargnamesCompatAttr
from _pytest.outcomes import fail, TEST_OUTCOME
@attr.s(frozen=True)
class PseudoFixtureDef(object):
cached_result = attr.ib()
scope = attr.ib()
def pytest_sessionstart(session):
import _pytest.python
import _pytest.nodes
scopename2class.update({
'class': _pytest.python.Class,
'module': _pytest.python.Module,
'function': _pytest.main.Item,
'function': _pytest.nodes.Item,
'session': _pytest.main.Session,
})
session._fixturemanager = FixtureManager(session)
@ -38,6 +53,7 @@ scope2props["class"] = scope2props["module"] + ("cls",)
scope2props["instance"] = scope2props["class"] + ("instance", )
scope2props["function"] = scope2props["instance"] + ("function", "keywords")
def scopeproperty(name=None, doc=None):
def decoratescope(func):
scopename = name or func.__name__
@ -55,8 +71,6 @@ def scopeproperty(name=None, doc=None):
def get_scope_node(node, scope):
cls = scopename2class.get(scope)
if cls is None:
if scope == "session":
return node.session
raise ValueError("unknown scope")
return node.getparent(cls)
@ -69,7 +83,7 @@ def add_funcarg_pseudo_fixture_def(collector, metafunc, fixturemanager):
# XXX we can probably avoid this algorithm if we modify CallSpec2
# to directly care for creating the fixturedefs within its methods.
if not metafunc._calls[0].funcargs:
return # this function call does not have direct parametrization
return # this function call does not have direct parametrization
# collect funcargs of all callspecs into a list of values
arg2params = {}
arg2scope = {}
@ -105,28 +119,26 @@ def add_funcarg_pseudo_fixture_def(collector, metafunc, fixturemanager):
if node and argname in node._name2pseudofixturedef:
arg2fixturedefs[argname] = [node._name2pseudofixturedef[argname]]
else:
fixturedef = FixtureDef(fixturemanager, '', argname,
get_direct_param_fixture_func,
arg2scope[argname],
valuelist, False, False)
fixturedef = FixtureDef(fixturemanager, '', argname,
get_direct_param_fixture_func,
arg2scope[argname],
valuelist, False, False)
arg2fixturedefs[argname] = [fixturedef]
if node is not None:
node._name2pseudofixturedef[argname] = fixturedef
def getfixturemarker(obj):
""" return fixturemarker or None if it doesn't exist or raised
exceptions."""
try:
return getattr(obj, "_pytestfixturefunction", None)
except Exception:
except TEST_OUTCOME:
# some objects raise errors like request (from flask import request)
# we don't expect them to be fixture functions
return None
def get_parametrized_fixture_keys(item, scopenum):
""" return list of keys for all parametrized arguments which match
the specified scope. """
@ -136,10 +148,10 @@ def get_parametrized_fixture_keys(item, scopenum):
except AttributeError:
pass
else:
# cs.indictes.items() is random order of argnames but
# then again different functions (items) can change order of
# arguments so it doesn't matter much probably
for argname, param_index in cs.indices.items():
# cs.indices.items() is random order of argnames. Need to
# sort this so that different calls to
# get_parametrized_fixture_keys will be deterministic.
for argname, param_index in sorted(cs.indices.items()):
if cs._arg2scopenum[argname] != scopenum:
continue
if scopenum == 0: # session
@ -158,61 +170,59 @@ def get_parametrized_fixture_keys(item, scopenum):
def reorder_items(items):
argkeys_cache = {}
items_by_argkey = {}
for scopenum in range(0, scopenum_function):
argkeys_cache[scopenum] = d = {}
items_by_argkey[scopenum] = item_d = defaultdict(deque)
for item in items:
keys = set(get_parametrized_fixture_keys(item, scopenum))
keys = OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
if keys:
d[item] = keys
return reorder_items_atscope(items, set(), argkeys_cache, 0)
for key in keys:
item_d[key].append(item)
items = OrderedDict.fromkeys(items)
return list(reorder_items_atscope(items, argkeys_cache, items_by_argkey, 0))
def reorder_items_atscope(items, ignore, argkeys_cache, scopenum):
def fix_cache_order(item, argkeys_cache, items_by_argkey):
for scopenum in range(0, scopenum_function):
for key in argkeys_cache[scopenum].get(item, []):
items_by_argkey[scopenum][key].appendleft(item)
def reorder_items_atscope(items, argkeys_cache, items_by_argkey, scopenum):
if scopenum >= scopenum_function or len(items) < 3:
return items
items_done = []
while 1:
items_before, items_same, items_other, newignore = \
slice_items(items, ignore, argkeys_cache[scopenum])
items_before = reorder_items_atscope(
items_before, ignore, argkeys_cache,scopenum+1)
if items_same is None:
# nothing to reorder in this scope
assert items_other is None
return items_done + items_before
items_done.extend(items_before)
items = items_same + items_other
ignore = newignore
def slice_items(items, ignore, scoped_argkeys_cache):
# we pick the first item which uses a fixture instance in the
# requested scope and which we haven't seen yet. We slice the input
# items list into a list of items_nomatch, items_same and
# items_other
if scoped_argkeys_cache: # do we need to do work at all?
it = iter(items)
# first find a slicing key
for i, item in enumerate(it):
argkeys = scoped_argkeys_cache.get(item)
if argkeys is not None:
argkeys = argkeys.difference(ignore)
if argkeys: # found a slicing key
slicing_argkey = argkeys.pop()
items_before = items[:i]
items_same = [item]
items_other = []
# now slice the remainder of the list
for item in it:
argkeys = scoped_argkeys_cache.get(item)
if argkeys and slicing_argkey in argkeys and \
slicing_argkey not in ignore:
items_same.append(item)
else:
items_other.append(item)
newignore = ignore.copy()
newignore.add(slicing_argkey)
return (items_before, items_same, items_other, newignore)
return items, None, None, None
ignore = set()
items_deque = deque(items)
items_done = OrderedDict()
scoped_items_by_argkey = items_by_argkey[scopenum]
scoped_argkeys_cache = argkeys_cache[scopenum]
while items_deque:
no_argkey_group = OrderedDict()
slicing_argkey = None
while items_deque:
item = items_deque.popleft()
if item in items_done or item in no_argkey_group:
continue
argkeys = OrderedDict.fromkeys(k for k in scoped_argkeys_cache.get(item, []) if k not in ignore)
if not argkeys:
no_argkey_group[item] = None
else:
slicing_argkey, _ = argkeys.popitem()
# we don't have to remove relevant items from later in the deque because they'll just be ignored
matching_items = [i for i in scoped_items_by_argkey[slicing_argkey] if i in items]
for i in reversed(matching_items):
fix_cache_order(i, argkeys_cache, items_by_argkey)
items_deque.appendleft(i)
break
if no_argkey_group:
no_argkey_group = reorder_items_atscope(
no_argkey_group, argkeys_cache, items_by_argkey, scopenum + 1)
for item in no_argkey_group:
items_done[item] = None
ignore.add(slicing_argkey)
return items_done
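
As a rough illustration of what this reordering achieves, consider a hypothetical test module with a parametrized higher-scoped fixture: tests sharing the same parameter value are grouped together so the fixture is typically set up and torn down only once per value.

import pytest

@pytest.fixture(scope="module", params=["a", "b"])
def mode(request):
    return request.param

def test_one(mode):
    assert mode in ("a", "b")

def test_two(mode):
    assert mode in ("a", "b")

# Typical run order: test_one[a], test_two[a], test_one[b], test_two[b]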
def fillfixtures(function):
@ -237,11 +247,11 @@ def fillfixtures(function):
request._fillfixtures()
def get_direct_param_fixture_func(request):
return request.param
class FuncFixtureInfo:
class FuncFixtureInfo(object):
def __init__(self, argnames, names_closure, name2fixturedefs):
self.argnames = argnames
self.names_closure = names_closure
@ -262,7 +272,6 @@ class FixtureRequest(FuncargnamesCompatAttr):
self.fixturename = None
#: Scope string, one of "function", "class", "module", "session"
self.scope = "function"
self._fixture_values = {} # argname -> fixture value
self._fixture_defs = {} # argname -> FixtureDef
fixtureinfo = pyfuncitem._fixtureinfo
self._arg2fixturedefs = fixtureinfo.name2fixturedefs.copy()
@ -279,7 +288,6 @@ class FixtureRequest(FuncargnamesCompatAttr):
""" underlying collection node (depends on current request scope)"""
return self._getscopeitem(self.scope)
def _getnextfixturedef(self, argname):
fixturedefs = self._arg2fixturedefs.get(argname, None)
if fixturedefs is None:
@ -301,7 +309,6 @@ class FixtureRequest(FuncargnamesCompatAttr):
""" the pytest config object associated with this request. """
return self._pyfuncitem.config
@scopeproperty()
def function(self):
""" test function object if the request has a per-function scope. """
@ -365,10 +372,7 @@ class FixtureRequest(FuncargnamesCompatAttr):
:arg marker: a :py:class:`_pytest.mark.MarkDecorator` object
created by a call to ``pytest.mark.NAME(...)``.
"""
try:
self.node.keywords[marker.markname] = marker
except AttributeError:
raise ValueError(marker)
self.node.add_marker(marker)
def raiseerror(self, msg):
""" raise a FixtureLookupError with the given message. """
@ -397,7 +401,7 @@ class FixtureRequest(FuncargnamesCompatAttr):
:arg extrakey: added to internal caching key of (funcargname, scope).
"""
if not hasattr(self.config, '_setupcache'):
self.config._setupcache = {} # XXX weakref?
self.config._setupcache = {} # XXX weakref?
cachekey = (self.fixturename, self._getscopeitem(scope), extrakey)
cache = self.config._setupcache
try:
@ -428,7 +432,8 @@ class FixtureRequest(FuncargnamesCompatAttr):
from _pytest import deprecated
warnings.warn(
deprecated.GETFUNCARGVALUE,
DeprecationWarning)
DeprecationWarning,
stacklevel=2)
return self.getfixturevalue(argname)
def _get_active_fixturedef(self, argname):
@ -439,30 +444,35 @@ class FixtureRequest(FuncargnamesCompatAttr):
fixturedef = self._getnextfixturedef(argname)
except FixtureLookupError:
if argname == "request":
class PseudoFixtureDef:
cached_result = (self, [0], None)
scope = "function"
return PseudoFixtureDef
cached_result = (self, [0], None)
scope = "function"
return PseudoFixtureDef(cached_result, scope)
raise
# remove indent to prevent the python3 exception
# from leaking into the call
result = self._getfixturevalue(fixturedef)
self._fixture_values[argname] = result
self._compute_fixture_value(fixturedef)
self._fixture_defs[argname] = fixturedef
return fixturedef
def _get_fixturestack(self):
current = self
l = []
values = []
while 1:
fixturedef = getattr(current, "_fixturedef", None)
if fixturedef is None:
l.reverse()
return l
l.append(fixturedef)
values.reverse()
return values
values.append(fixturedef)
current = current._parent_request
def _getfixturevalue(self, fixturedef):
def _compute_fixture_value(self, fixturedef):
"""
Creates a SubRequest based on "self" and calls the execute method of the given fixturedef object. This will
force the FixtureDef object to throw away any previous results and compute a new fixture value, which
will be stored into the FixtureDef object itself.
:param FixtureDef fixturedef:
"""
# prepare a subrequest object before calling fixture function
# (latter managed by fixturedef)
argname = fixturedef.argname
@ -511,12 +521,11 @@ class FixtureRequest(FuncargnamesCompatAttr):
exc_clear()
try:
# call the fixture function
val = fixturedef.execute(request=subrequest)
fixturedef.execute(request=subrequest)
finally:
# if fixture function failed it might have registered finalizers
self.session._setupstate.addfinalizer(fixturedef.finish,
self.session._setupstate.addfinalizer(functools.partial(fixturedef.finish, request=subrequest),
subrequest.node)
return val
def _check_scope(self, argname, invoking_scope, requested_scope):
if argname == "request":
@ -527,8 +536,8 @@ class FixtureRequest(FuncargnamesCompatAttr):
fail("ScopeMismatch: You tried to access the %r scoped "
"fixture %r with a %r scoped request object, "
"involved factories\n%s" % (
(requested_scope, argname, invoking_scope, "\n".join(lines))),
pytrace=False)
(requested_scope, argname, invoking_scope, "\n".join(lines))),
pytrace=False)
def _factorytraceback(self):
lines = []
@ -549,16 +558,17 @@ class FixtureRequest(FuncargnamesCompatAttr):
if node is None and scope == "class":
# fallback to function item itself
node = self._pyfuncitem
assert node
assert node, 'Could not obtain a node for scope "{}" for function {!r}'.format(scope, self._pyfuncitem)
return node
def __repr__(self):
return "<FixtureRequest for %r>" %(self.node)
return "<FixtureRequest for %r>" % (self.node)
class SubRequest(FixtureRequest):
""" a sub request for handling getting a fixture from a
test function/fixture. """
def __init__(self, request, scope, param, param_index, fixturedef):
self._parent_request = request
self.fixturename = fixturedef.argname
@ -567,9 +577,7 @@ class SubRequest(FixtureRequest):
self.param_index = param_index
self.scope = scope
self._fixturedef = fixturedef
self.addfinalizer = fixturedef.addfinalizer
self._pyfuncitem = request._pyfuncitem
self._fixture_values = request._fixture_values
self._fixture_defs = request._fixture_defs
self._arg2fixturedefs = request._arg2fixturedefs
self._arg2index = request._arg2index
@ -578,6 +586,9 @@ class SubRequest(FixtureRequest):
def __repr__(self):
return "<SubRequest %r for %r>" % (self.fixturename, self._pyfuncitem)
def addfinalizer(self, finalizer):
self._fixturedef.addfinalizer(finalizer)
class ScopeMismatchError(Exception):
""" A fixture function tries to use a different fixture function which
@ -609,6 +620,7 @@ def scope2index(scope, descr, where=None):
class FixtureLookupError(LookupError):
""" could not return a requested Fixture (missing or invalid). """
def __init__(self, argname, request, msg=None):
self.argname = argname
self.request = request
@ -631,9 +643,9 @@ class FixtureLookupError(LookupError):
lines, _ = inspect.getsourcelines(get_real_func(function))
except (IOError, IndexError, TypeError):
error_msg = "file %s, line %s: source code not available"
addline(error_msg % (fspath, lineno+1))
addline(error_msg % (fspath, lineno + 1))
else:
addline("file %s, line %s" % (fspath, lineno+1))
addline("file %s, line %s" % (fspath, lineno + 1))
for i, line in enumerate(lines):
line = line.rstrip()
addline(" " + line)
@ -649,7 +661,7 @@ class FixtureLookupError(LookupError):
if faclist and name not in available:
available.append(name)
msg = "fixture %r not found" % (self.argname,)
msg += "\n available fixtures: %s" %(", ".join(sorted(available)),)
msg += "\n available fixtures: %s" % (", ".join(sorted(available)),)
msg += "\n use 'pytest --fixtures [testpath]' for help on them."
return FixtureLookupErrorRepr(fspath, lineno, tblines, msg, self.argname)
@ -675,12 +687,12 @@ class FixtureLookupErrorRepr(TerminalRepr):
tw.line('{0} {1}'.format(FormattedExcinfo.flow_marker,
line.strip()), red=True)
tw.line()
tw.line("%s:%d" % (self.filename, self.firstlineno+1))
tw.line("%s:%d" % (self.filename, self.firstlineno + 1))
def fail_fixturefunc(fixturefunc, msg):
fs, lineno = getfslineno(fixturefunc)
location = "%s:%s" % (fs, lineno+1)
location = "%s:%s" % (fs, lineno + 1)
source = _pytest._code.Source(fixturefunc)
fail(msg + ":\n\n" + str(source.indent()) + "\n" + location,
pytrace=False)
@ -699,7 +711,7 @@ def call_fixture_func(fixturefunc, request, kwargs):
pass
else:
fail_fixturefunc(fixturefunc,
"yield_fixture function has more than one 'yield'")
"yield_fixture function has more than one 'yield'")
request.addfinalizer(teardown)
else:
@ -707,8 +719,9 @@ def call_fixture_func(fixturefunc, request, kwargs):
return res
class FixtureDef:
class FixtureDef(object):
""" A container for a factory definition. """
def __init__(self, fixturemanager, baseid, argname, func, scope, params,
unittest=False, ids=None):
self._fixturemanager = fixturemanager
@ -723,23 +736,22 @@ class FixtureDef:
where=baseid
)
self.params = params
startindex = unittest and 1 or None
self.argnames = getfuncargnames(func, startindex=startindex)
self.argnames = getfuncargnames(func, is_method=unittest)
self.unittest = unittest
self.ids = ids
self._finalizer = []
self._finalizers = []
def addfinalizer(self, finalizer):
self._finalizer.append(finalizer)
self._finalizers.append(finalizer)
def finish(self):
def finish(self, request):
exceptions = []
try:
while self._finalizer:
while self._finalizers:
try:
func = self._finalizer.pop()
func = self._finalizers.pop()
func()
except:
except: # noqa
exceptions.append(sys.exc_info())
if exceptions:
e = exceptions[0]
@ -747,12 +759,15 @@ class FixtureDef:
py.builtin._reraise(*e)
finally:
ihook = self._fixturemanager.session.ihook
ihook.pytest_fixture_post_finalizer(fixturedef=self)
hook = self._fixturemanager.session.gethookproxy(request.node.fspath)
hook.pytest_fixture_post_finalizer(fixturedef=self, request=request)
# even if finalization fails, we invalidate
# the cached fixture value
# the cached fixture value and remove
# all finalizers because they may be bound methods which will
# keep instances alive
if hasattr(self, "cached_result"):
del self.cached_result
self._finalizers = []
def execute(self, request):
# get required arguments and register our own finish()
@ -760,7 +775,7 @@ class FixtureDef:
for argname in self.argnames:
fixturedef = request._get_active_fixturedef(argname)
if argname != "request":
fixturedef.addfinalizer(self.finish)
fixturedef.addfinalizer(functools.partial(self.finish, request=request))
my_cache_key = request.param_index
cached_result = getattr(self, "cached_result", None)
@ -773,16 +788,17 @@ class FixtureDef:
return result
# we have a previous but differently parametrized fixture instance
# so we need to tear it down before creating a new one
self.finish()
self.finish(request)
assert not hasattr(self, "cached_result")
ihook = self._fixturemanager.session.ihook
return ihook.pytest_fixture_setup(fixturedef=self, request=request)
hook = self._fixturemanager.session.gethookproxy(request.node.fspath)
return hook.pytest_fixture_setup(fixturedef=self, request=request)
def __repr__(self):
return ("<FixtureDef name=%r scope=%r baseid=%r >" %
(self.argname, self.scope, self.baseid))
def pytest_fixture_setup(fixturedef, request):
""" Execution of fixture setup. """
kwargs = {}
@ -808,25 +824,34 @@ def pytest_fixture_setup(fixturedef, request):
my_cache_key = request.param_index
try:
result = call_fixture_func(fixturefunc, request, kwargs)
except Exception:
except TEST_OUTCOME:
fixturedef.cached_result = (None, my_cache_key, sys.exc_info())
raise
fixturedef.cached_result = (result, my_cache_key, None)
return result
class FixtureFunctionMarker:
def __init__(self, scope, params, autouse=False, ids=None, name=None):
self.scope = scope
self.params = params
self.autouse = autouse
self.ids = ids
self.name = name
def _ensure_immutable_ids(ids):
if ids is None:
return
if callable(ids):
return ids
return tuple(ids)
@attr.s(frozen=True)
class FixtureFunctionMarker(object):
scope = attr.ib()
params = attr.ib(converter=attr.converters.optional(tuple))
autouse = attr.ib(default=False)
ids = attr.ib(default=None, converter=_ensure_immutable_ids)
name = attr.ib(default=None)
def __call__(self, function):
if isclass(function):
raise ValueError(
"class fixtures not supported (may be in the future)")
"class fixtures not supported (may be in the future)")
if getattr(function, "_pytestfixturefunction", False):
raise ValueError(
"fixture is being applied more than once to the same function")
@ -835,9 +860,8 @@ class FixtureFunctionMarker:
return function
def fixture(scope="function", params=None, autouse=False, ids=None, name=None):
""" (return a) decorator to mark a fixture factory function.
"""Decorator to mark a fixture factory function.
This decorator can be used (with or without parameters) to define a
fixture function. The name of the fixture function can later be
@ -874,10 +898,10 @@ def fixture(scope="function", params=None, autouse=False, ids=None, name=None):
instead of ``return``. In this case, the code block after the ``yield`` statement is executed
as teardown code regardless of the test outcome. A fixture function must yield exactly once.
"""
if callable(scope) and params is None and autouse == False:
if callable(scope) and params is None and autouse is False:
# direct decoration
return FixtureFunctionMarker(
"function", params, autouse, name=name)(scope)
"function", params, autouse, name=name)(scope)
if params is not None and not isinstance(params, (list, tuple)):
params = list(params)
return FixtureFunctionMarker(scope, params, autouse, ids=ids, name=name)
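
A brief usage sketch of the ``yield`` style described in the docstring above (the resource dict is purely illustrative); the code after ``yield`` runs as teardown regardless of the test outcome:

import pytest

@pytest.fixture
def resource():
    res = {"open": True}   # setup
    yield res              # value provided to the test
    res["open"] = False    # teardown, runs even if the test fails

def test_uses_resource(resource):
    assert resource["open"]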
@ -892,7 +916,7 @@ def yield_fixture(scope="function", params=None, autouse=False, ids=None, name=N
if callable(scope) and params is None and not autouse:
# direct decoration
return FixtureFunctionMarker(
"function", params, autouse, ids=ids, name=name)(scope)
"function", params, autouse, ids=ids, name=name)(scope)
else:
return FixtureFunctionMarker(scope, params, autouse, ids=ids, name=name)
@ -902,11 +926,19 @@ defaultfuncargprefixmarker = fixture()
@fixture(scope="session")
def pytestconfig(request):
""" the pytest config object with access to command line opts."""
"""Session-scoped fixture that returns the :class:`_pytest.config.Config` object.
Example::
def test_foo(pytestconfig):
if pytestconfig.getoption("verbose"):
...
"""
return request.config
class FixtureManager:
class FixtureManager(object):
"""
pytest fixtures definitions and information is stored and managed
from this class.
@ -951,20 +983,14 @@ class FixtureManager:
self._nodeid_and_autousenames = [("", self.config.getini("usefixtures"))]
session.config.pluginmanager.register(self, "funcmanage")
def getfixtureinfo(self, node, func, cls, funcargs=True):
if funcargs and not hasattr(node, "nofuncargs"):
if cls is not None:
startindex = 1
else:
startindex = None
argnames = getfuncargnames(func, startindex)
argnames = getfuncargnames(func, cls=cls)
else:
argnames = ()
usefixtures = getattr(func, "usefixtures", None)
usefixtures = flatten(mark.args for mark in node.iter_markers() if mark.name == "usefixtures")
initialnames = argnames
if usefixtures is not None:
initialnames = usefixtures.args + initialnames
initialnames = tuple(usefixtures) + initialnames
fm = node.session._fixturemanager
names_closure, arg2fixturedefs = fm.getfixtureclosure(initialnames,
node)
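
For context, a sketch of the marker whose arguments are flattened above; the fixture names are hypothetical ones that would be defined in a conftest:

import pytest

@pytest.mark.usefixtures("clean_dir", "fake_clock")
def test_with_extra_fixtures():
    pass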
@ -982,8 +1008,8 @@ class FixtureManager:
# by their test id)
if p.basename.startswith("conftest.py"):
nodeid = p.dirpath().relto(self.config.rootdir)
if p.sep != "/":
nodeid = nodeid.replace(p.sep, "/")
if p.sep != nodes.SEP:
nodeid = nodeid.replace(p.sep, nodes.SEP)
self.parsefactories(plugin, nodeid)
def _getautousenames(self, nodeid):
@ -993,13 +1019,10 @@ class FixtureManager:
if nodeid.startswith(baseid):
if baseid:
i = len(baseid)
nextchar = nodeid[i:i+1]
nextchar = nodeid[i:i + 1]
if nextchar and nextchar not in ":/":
continue
autousenames.extend(basenames)
# make sure autousenames are sorted by scope, scopenum 0 is session
autousenames.sort(
key=lambda x: self._arg2fixturedefs[x][-1].scopenum)
return autousenames
def getfixtureclosure(self, fixturenames, parentnode):
@ -1030,6 +1053,16 @@ class FixtureManager:
if fixturedefs:
arg2fixturedefs[argname] = fixturedefs
merge(fixturedefs[-1].argnames)
def sort_by_scope(arg_name):
try:
fixturedefs = arg2fixturedefs[arg_name]
except KeyError:
return scopes.index('function')
else:
return fixturedefs[-1].scopenum
fixturenames_closure.sort(key=sort_by_scope)
return fixturenames_closure, arg2fixturedefs
def pytest_generate_tests(self, metafunc):
@ -1038,9 +1071,16 @@ class FixtureManager:
if faclist:
fixturedef = faclist[-1]
if fixturedef.params is not None:
func_params = getattr(getattr(metafunc.function, 'parametrize', None), 'args', [[None]])
parametrize_func = getattr(metafunc.function, 'parametrize', None)
if parametrize_func is not None:
parametrize_func = parametrize_func.combined
func_params = getattr(parametrize_func, 'args', [[None]])
func_kwargs = getattr(parametrize_func, 'kwargs', {})
# skip directly parametrized arguments
argnames = func_params[0]
if "argnames" in func_kwargs:
argnames = parametrize_func.kwargs["argnames"]
else:
argnames = func_params[0]
if not isinstance(argnames, (tuple, list)):
argnames = [x.strip() for x in argnames.split(",") if x.strip()]
if argname not in func_params and argname not in argnames:
@ -1128,6 +1168,5 @@ class FixtureManager:
def _matchfactories(self, fixturedefs, nodeid):
for fixturedef in fixturedefs:
if nodeid.startswith(fixturedef.baseid):
if nodes.ischildnode(fixturedef.baseid, nodeid):
yield fixturedef

View File

@ -5,7 +5,6 @@ pytest
from __future__ import absolute_import, division, print_function
def freeze_includes():
"""
Returns a list of module names used by py.test that should be

View File

@ -4,7 +4,8 @@ from __future__ import absolute_import, division, print_function
import py
import pytest
from _pytest.config import PrintHelp
import os, sys
import os
import sys
from argparse import Action
@ -41,24 +42,24 @@ class HelpAction(Action):
def pytest_addoption(parser):
group = parser.getgroup('debugconfig')
group.addoption('--version', action="store_true",
help="display pytest lib version and import information.")
help="display pytest lib version and import information.")
group._addoption("-h", "--help", action=HelpAction, dest="help",
help="show help message and configuration info")
group._addoption('-p', action="append", dest="plugins", default = [],
metavar="name",
help="early-load given plugin (multi-allowed). "
"To avoid loading of plugins, use the `no:` prefix, e.g. "
"`no:doctest`.")
help="show help message and configuration info")
group._addoption('-p', action="append", dest="plugins", default=[],
metavar="name",
help="early-load given plugin (multi-allowed). "
"To avoid loading of plugins, use the `no:` prefix, e.g. "
"`no:doctest`.")
group.addoption('--traceconfig', '--trace-config',
action="store_true", default=False,
help="trace considerations of conftest.py files."),
action="store_true", default=False,
help="trace considerations of conftest.py files."),
group.addoption('--debug',
action="store_true", dest="debug", default=False,
help="store internal tracing debug information in 'pytestdebug.log'.")
action="store_true", dest="debug", default=False,
help="store internal tracing debug information in 'pytestdebug.log'.")
group._addoption(
'-o', '--override-ini', nargs='*', dest="override_ini",
'-o', '--override-ini', dest="override_ini",
action="append",
help="override config option with option=value style, e.g. `-o xfail_strict=True`.")
help='override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.')
@pytest.hookimpl(hookwrapper=True)
@ -69,10 +70,10 @@ def pytest_cmdline_parse():
path = os.path.abspath("pytestdebug.log")
debugfile = open(path, 'w')
debugfile.write("versions pytest-%s, py-%s, "
"python-%s\ncwd=%s\nargs=%s\n\n" %(
pytest.__version__, py.__version__,
".".join(map(str, sys.version_info)),
os.getcwd(), config._origargs))
"python-%s\ncwd=%s\nargs=%s\n\n" % (
pytest.__version__, py.__version__,
".".join(map(str, sys.version_info)),
os.getcwd(), config._origargs))
config.trace.root.setwriter(debugfile.write)
undo_tracing = config.pluginmanager.enable_tracing()
sys.stderr.write("writing pytestdebug information to %s\n" % path)
@ -86,11 +87,12 @@ def pytest_cmdline_parse():
config.add_cleanup(unset_tracing)
def pytest_cmdline_main(config):
if config.option.version:
p = py.path.local(pytest.__file__)
sys.stderr.write("This is pytest version %s, imported from %s\n" %
(pytest.__version__, p))
(pytest.__version__, p))
plugininfo = getpluginversioninfo(config)
if plugininfo:
for line in plugininfo:
@ -102,6 +104,7 @@ def pytest_cmdline_main(config):
config._ensure_unconfigure()
return 0
def showhelp(config):
reporter = config.pluginmanager.get_plugin('terminalreporter')
tw = reporter._tw
@ -117,7 +120,7 @@ def showhelp(config):
if type is None:
type = "string"
spec = "%s (%s)" % (name, type)
line = " %-24s %s" %(spec, help)
line = " %-24s %s" % (spec, help)
tw.line(line[:tw.fullwidth])
tw.line()
@ -146,6 +149,7 @@ conftest_options = [
('pytest_plugins', 'list of plugin names to load'),
]
def getpluginversioninfo(config):
lines = []
plugininfo = config.pluginmanager.list_plugin_distinfo()
@ -157,11 +161,12 @@ def getpluginversioninfo(config):
lines.append(" " + content)
return lines
def pytest_report_header(config):
lines = []
if config.option.debug or config.option.traceconfig:
lines.append("using: pytest-%s pylib-%s" %
(pytest.__version__,py.__version__))
(pytest.__version__, py.__version__))
verinfo = getpluginversioninfo(config)
if verinfo:
@ -175,5 +180,5 @@ def pytest_report_header(config):
r = plugin.__file__
else:
r = repr(plugin)
lines.append(" %-20s: %s" %(name, r))
lines.append(" %-20s: %s" % (name, r))
return lines

View File

@ -1,6 +1,6 @@
""" hook specifications for pytest plugins, invoked from main.py and builtin plugins. """
from _pytest._pluggy import HookspecMarker
from pluggy import HookspecMarker
hookspec = HookspecMarker("pytest")
@ -8,24 +8,44 @@ hookspec = HookspecMarker("pytest")
# Initialization hooks called for every plugin
# -------------------------------------------------------------------------
@hookspec(historic=True)
def pytest_addhooks(pluginmanager):
"""called at plugin registration time to allow adding new hooks via a call to
pluginmanager.add_hookspecs(module_or_class, prefix)."""
``pluginmanager.add_hookspecs(module_or_class, prefix)``.
:param _pytest.config.PytestPluginManager pluginmanager: pytest plugin manager
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@hookspec(historic=True)
def pytest_namespace():
"""
DEPRECATED: this hook causes direct monkeypatching on pytest, its use is strongly discouraged
(**Deprecated**) this hook causes direct monkeypatching on pytest, its use is strongly discouraged
return dict of name->object to be made globally available in
the pytest namespace. This hook is called at plugin registration
time.
the pytest namespace.
This hook is called at plugin registration time.
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@hookspec(historic=True)
def pytest_plugin_registered(plugin, manager):
""" a new pytest plugin got registered. """
""" a new pytest plugin got registered.
:param plugin: the plugin module or instance
:param _pytest.config.PytestPluginManager manager: pytest plugin manager
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@hookspec(historic=True)
@ -39,7 +59,7 @@ def pytest_addoption(parser):
files situated at the tests root directory due to how pytest
:ref:`discovers plugins during startup <pluginorder>`.
:arg parser: To add command line options, call
:arg _pytest.config.Parser parser: To add command line options, call
:py:func:`parser.addoption(...) <_pytest.config.Parser.addoption>`.
To add ini-file values call :py:func:`parser.addini(...)
<_pytest.config.Parser.addini>`.
@ -54,42 +74,89 @@ def pytest_addoption(parser):
a value read from an ini-style file.
The config object is passed around on many internal objects via the ``.config``
attribute or can be retrieved as the ``pytestconfig`` fixture or accessed
via (deprecated) ``pytest.config``.
attribute or can be retrieved as the ``pytestconfig`` fixture.
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@hookspec(historic=True)
def pytest_configure(config):
""" called after command line options have been parsed
and all plugins and initial conftest files been loaded.
This hook is called for every plugin.
"""
Allows plugins and conftest files to perform initial configuration.
This hook is called for every plugin and initial conftest file
after command line options have been parsed.
After that, the hook is called for other conftest files as they are
imported.
.. note::
This hook is incompatible with ``hookwrapper=True``.
:arg _pytest.config.Config config: pytest config object
"""
# -------------------------------------------------------------------------
# Bootstrapping hooks called for plugins registered early enough:
# internal and 3rd party plugins as well as directly
# discoverable conftest.py local plugins.
# internal and 3rd party plugins.
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_cmdline_parse(pluginmanager, args):
"""return initialized config object, parsing the specified args.
Stops at first non-None result, see :ref:`firstresult` """
Stops at first non-None result, see :ref:`firstresult`
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.PytestPluginManager pluginmanager: pytest plugin manager
:param list[str] args: list of arguments passed on the command line
"""
def pytest_cmdline_preparse(config, args):
"""(deprecated) modify command line arguments before option parsing. """
"""(**Deprecated**) modify command line arguments before option parsing.
This hook is considered deprecated and will be removed in a future pytest version. Consider
using :func:`pytest_load_initial_conftests` instead.
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.Config config: pytest config object
:param list[str] args: list of arguments passed on the command line
"""
@hookspec(firstresult=True)
def pytest_cmdline_main(config):
""" called for performing the main command line action. The default
implementation will invoke the configure hooks and runtest_mainloop.
Stops at first non-None result, see :ref:`firstresult` """
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
Stops at first non-None result, see :ref:`firstresult`
:param _pytest.config.Config config: pytest config object
"""
def pytest_load_initial_conftests(early_config, parser, args):
""" implements the loading of initial conftest files ahead
of command line option parsing. """
of command line option parsing.
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.Config early_config: pytest config object
:param list[str] args: list of arguments passed on the command line
:param _pytest.config.Parser parser: to add command line options
"""
# -------------------------------------------------------------------------
@ -98,16 +165,30 @@ def pytest_load_initial_conftests(early_config, parser, args):
@hookspec(firstresult=True)
def pytest_collection(session):
""" perform the collection protocol for the given session.
"""Perform the collection protocol for the given session.
Stops at first non-None result, see :ref:`firstresult`.
:param _pytest.main.Session session: the pytest session object
"""
Stops at first non-None result, see :ref:`firstresult` """
def pytest_collection_modifyitems(session, config, items):
""" called after collection has been performed, may filter or re-order
the items in-place."""
the items in-place.
:param _pytest.main.Session session: the pytest session object
:param _pytest.config.Config config: pytest config object
:param List[_pytest.nodes.Item] items: list of item objects
"""
def pytest_collection_finish(session):
""" called after collection has been performed and modified. """
""" called after collection has been performed and modified.
:param _pytest.main.Session session: the pytest session object
"""
@hookspec(firstresult=True)
def pytest_ignore_collect(path, config):
@ -116,31 +197,48 @@ def pytest_ignore_collect(path, config):
more specific hooks.
Stops at first non-None result, see :ref:`firstresult`
:param str path: the path to analyze
:param _pytest.config.Config config: pytest config object
"""
@hookspec(firstresult=True)
def pytest_collect_directory(path, parent):
""" called before traversing a directory for collection files.
Stops at first non-None result, see :ref:`firstresult` """
Stops at first non-None result, see :ref:`firstresult`
:param str path: the path to analyze
"""
def pytest_collect_file(path, parent):
""" return collection Node or None for the given path. Any new node
needs to have the specified ``parent`` as a parent."""
needs to have the specified ``parent`` as a parent.
:param str path: the path to collect
"""
# logging hooks for collection
def pytest_collectstart(collector):
""" collector starts collecting. """
def pytest_itemcollected(item):
""" we just collected a test item. """
def pytest_collectreport(report):
""" collector finished collecting. """
def pytest_deselected(items):
""" called for test items deselected by keyword. """
@hookspec(firstresult=True)
def pytest_make_collect_report(collector):
""" perform ``collector.collect()`` and return a CollectReport.
@ -151,6 +249,7 @@ def pytest_make_collect_report(collector):
# Python test function related hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_pycollect_makemodule(path, parent):
""" return a Module collector or None for the given path.
@ -160,42 +259,57 @@ def pytest_pycollect_makemodule(path, parent):
Stops at first non-None result, see :ref:`firstresult` """
@hookspec(firstresult=True)
def pytest_pycollect_makeitem(collector, name, obj):
""" return custom item/collector for a python object in a module, or None.
Stops at first non-None result, see :ref:`firstresult` """
@hookspec(firstresult=True)
def pytest_pyfunc_call(pyfuncitem):
""" call underlying test function.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_generate_tests(metafunc):
""" generate (multiple) parametrized calls to a test function."""
@hookspec(firstresult=True)
def pytest_make_parametrize_id(config, val, argname):
"""Return a user-friendly string representation of the given ``val`` that will be used
by @pytest.mark.parametrize calls. Return None if the hook doesn't know about ``val``.
The parameter name is available as ``argname``, if required.
Stops at first non-None result, see :ref:`firstresult` """
Stops at first non-None result, see :ref:`firstresult`
:param _pytest.config.Config config: pytest config object
:param val: the parametrized value
:param str argname: the automatic parameter name produced by pytest
"""
# -------------------------------------------------------------------------
# generic runtest related hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_runtestloop(session):
""" called for performing the main runtest loop
(after collection finished).
Stops at first non-None result, see :ref:`firstresult` """
Stops at first non-None result, see :ref:`firstresult`
:param _pytest.main.Session session: the pytest session object
"""
def pytest_itemstart(item, node):
""" (deprecated, use pytest_runtest_logstart). """
"""(**Deprecated**) use pytest_runtest_logstart. """
@hookspec(firstresult=True)
def pytest_runtest_protocol(item, nextitem):
@ -214,15 +328,37 @@ def pytest_runtest_protocol(item, nextitem):
Stops at first non-None result, see :ref:`firstresult` """
def pytest_runtest_logstart(nodeid, location):
""" signal the start of running a single test item. """
""" signal the start of running a single test item.
This hook will be called **before** :func:`pytest_runtest_setup`, :func:`pytest_runtest_call` and
:func:`pytest_runtest_teardown` hooks.
:param str nodeid: full id of the item
:param location: a triple of ``(filename, linenum, testname)``
"""
def pytest_runtest_logfinish(nodeid, location):
""" signal the complete finish of running a single test item.
This hook will be called **after** :func:`pytest_runtest_setup`, :func:`pytest_runtest_call` and
:func:`pytest_runtest_teardown` hooks.
:param str nodeid: full id of the item
:param location: a triple of ``(filename, linenum, testname)``
"""
def pytest_runtest_setup(item):
""" called before ``pytest_runtest_call(item)``. """
def pytest_runtest_call(item):
""" called to execute the test ``item``. """
def pytest_runtest_teardown(item, nextitem):
""" called after ``pytest_runtest_call``.
@ -232,6 +368,7 @@ def pytest_runtest_teardown(item, nextitem):
so that nextitem only needs to call setup-functions.
"""
@hookspec(firstresult=True)
def pytest_runtest_makereport(item, call):
""" return a :py:class:`_pytest.runner.TestReport` object
@ -240,6 +377,7 @@ def pytest_runtest_makereport(item, call):
Stops at first non-None result, see :ref:`firstresult` """
def pytest_runtest_logreport(report):
""" process a test setup/call/teardown report relating to
the respective phase of executing a test. """
@ -248,13 +386,23 @@ def pytest_runtest_logreport(report):
# Fixture related hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_fixture_setup(fixturedef, request):
""" performs fixture setup execution.
Stops at first non-None result, see :ref:`firstresult` """
:return: The return value of the call to the fixture function
def pytest_fixture_post_finalizer(fixturedef):
Stops at first non-None result, see :ref:`firstresult`
.. note::
If the fixture function returns None, other implementations of
this hook function will continue to be called, according to the
behavior of the :ref:`firstresult` option.
"""
def pytest_fixture_post_finalizer(fixturedef, request):
""" called after fixture teardown, but before the cache is cleared so
the fixture result cache ``fixturedef.cached_result`` can
still be accessed."""
@ -263,14 +411,28 @@ def pytest_fixture_post_finalizer(fixturedef):
# test session related hooks
# -------------------------------------------------------------------------
def pytest_sessionstart(session):
""" before session.main() is called. """
""" called after the ``Session`` object has been created and before performing collection
and entering the run test loop.
:param _pytest.main.Session session: the pytest session object
"""
def pytest_sessionfinish(session, exitstatus):
""" whole test run finishes. """
""" called after whole test run finished, right before returning the exit status to the system.
:param _pytest.main.Session session: the pytest session object
:param int exitstatus: the status which pytest will return to the system
"""
def pytest_unconfigure(config):
""" called before test process is exited. """
""" called before test process is exited.
:param _pytest.config.Config config: pytest config object
"""
# -------------------------------------------------------------------------
@ -284,14 +446,20 @@ def pytest_assertrepr_compare(config, op, left, right):
of strings. The strings will be joined by newlines but any newlines
*in* a string will be escaped. Note that all but the first line will
be indented slightly, the intention is for the first line to be a summary.
:param _pytest.config.Config config: pytest config object
"""
# -------------------------------------------------------------------------
# hooks for influencing reporting (invoked from _pytest_terminal)
# -------------------------------------------------------------------------
def pytest_report_header(config, startdir):
""" return a string to be displayed as header info for terminal reporting.
""" return a string or list of strings to be displayed as header info for terminal reporting.
:param _pytest.config.Config config: pytest config object
:param startdir: py.path object with the starting dir
.. note::
@ -300,26 +468,54 @@ def pytest_report_header(config, startdir):
:ref:`discovers plugins during startup <pluginorder>`.
"""
def pytest_report_collectionfinish(config, startdir, items):
"""
.. versionadded:: 3.2
return a string or list of strings to be displayed after collection has finished successfully.
These strings will be displayed after the standard "collected X items" message.
:param _pytest.config.Config config: pytest config object
:param startdir: py.path object with the starting dir
:param items: list of pytest items that are going to be executed; this list should not be modified.
"""
@hookspec(firstresult=True)
def pytest_report_teststatus(report):
""" return result-category, shortletter and verbose word for reporting.
Stops at first non-None result, see :ref:`firstresult` """
def pytest_terminal_summary(terminalreporter, exitstatus):
""" add additional section in terminal summary reporting. """
"""Add a section to terminal summary reporting.
:param _pytest.terminal.TerminalReporter terminalreporter: the internal terminal reporter object
:param int exitstatus: the exit status that will be reported back to the OS
.. versionadded:: 3.5
The ``config`` parameter.
"""
@hookspec(historic=True)
def pytest_logwarning(message, code, nodeid, fslocation):
""" process a warning specified by a message, a code string,
a nodeid and fslocation (both of which may be None
if the warning is not tied to a partilar node/location)."""
if the warning is not tied to a particular node/location).
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
# -------------------------------------------------------------------------
# doctest hooks
# -------------------------------------------------------------------------
@hookspec(firstresult=True)
def pytest_doctest_prepare_content(content):
""" return processed content for a given doctest
@ -330,12 +526,15 @@ def pytest_doctest_prepare_content(content):
# error handling and internal debugging hooks
# -------------------------------------------------------------------------
def pytest_internalerror(excrepr, excinfo):
""" called for internal errors. """
def pytest_keyboard_interrupt(excinfo):
""" called for keyboard interrupt. """
def pytest_exception_interact(node, call, report):
"""called when an exception was raised which can potentially be
interactively handled.
@ -344,10 +543,10 @@ def pytest_exception_interact(node, call, report):
that is not an internal exception like ``skip.Exception``.
"""
def pytest_enter_pdb(config):
""" called upon pdb.set_trace(), can be used by plugins to take special
action just before the python debugger enters interactive mode.
:arg config: pytest config object
:type config: _pytest.config.Config
:param _pytest.config.Config config: pytest config object
"""

View File

@ -1,254 +0,0 @@
Sorting per-resource
-----------------------------
for any given set of items:
- collect items per session-scoped parametrized funcarg
- re-order items so that no parametrizations are mixed
examples:
test()
test1(s1)
test1(s2)
test2()
test3(s1)
test3(s2)
gets sorted to:
test()
test2()
test1(s1)
test3(s1)
test1(s2)
test3(s2)
the new @setup functions
--------------------------------------
Consider a given @setup-marked function::
@pytest.mark.setup(maxscope=SCOPE)
def mysetup(request, arg1, arg2, ...)
...
request.addfinalizer(fin)
...
then FUNCARGSET denotes the set of (arg1, arg2, ...) funcargs and
all of its dependent funcargs. The mysetup function will execute
for any matching test item once per scope.
The scope is determined as the minimum scope of all scopes of the args
in FUNCARGSET and the given "maxscope".
If mysetup has been called and no finalizers have been called it is
called "active".
Furthermore the following rules apply:
- if an arg value in FUNCARGSET is about to be torn down, the
mysetup-registered finalizers will execute as well.
- There will never be two active mysetup invocations.
Example 1, session scope::
@pytest.mark.funcarg(scope="session", params=[1,2])
def db(request):
request.addfinalizer(db_finalize)
@pytest.mark.setup
def mysetup(request, db):
request.addfinalizer(mysetup_finalize)
...
And a given test module:
def test_something():
...
def test_otherthing():
pass
Here is what happens::
db(request) executes with request.param == 1
mysetup(request, db) executes
test_something() executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
db(request) executes with request.param == 2
mysetup(request, db) executes
test_something() executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
Example 2, session/function scope::
@pytest.mark.funcarg(scope="session", params=[1,2])
def db(request):
request.addfinalizer(db_finalize)
@pytest.mark.setup(scope="function")
def mysetup(request, db):
...
request.addfinalizer(mysetup_finalize)
...
And a given test module:
def test_something():
...
def test_otherthing():
pass
Here is what happens::
db(request) executes with request.param == 1
mysetup(request, db) executes
test_something() executes
mysetup_finalize() executes
mysetup(request, db) executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
db(request) executes with request.param == 2
mysetup(request, db) executes
test_something() executes
mysetup_finalize() executes
mysetup(request, db) executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
Example 3 - funcargs session-mix
----------------------------------------
Similar with funcargs, an example::
@pytest.mark.funcarg(scope="session", params=[1,2])
def db(request):
request.addfinalizer(db_finalize)
@pytest.mark.funcarg(scope="function")
def table(request, db):
...
request.addfinalizer(table_finalize)
...
And a given test module:
def test_something(table):
...
def test_otherthing(table):
pass
def test_thirdthing():
pass
Here is what happens::
db(request) executes with param == 1
table(request, db)
test_something(table)
table_finalize()
table(request, db)
test_otherthing(table)
table_finalize()
db_finalize
db(request) executes with param == 2
table(request, db)
test_something(table)
table_finalize()
table(request, db)
test_otherthing(table)
table_finalize()
db_finalize
test_thirdthing()
Data structures
--------------------
pytest internally maintains a dict of active funcargs with cache, param,
finalizer, (scopeitem?) information:
active_funcargs = dict()
if a parametrized "db" is activated:
active_funcargs["db"] = FuncargInfo(dbvalue, paramindex,
FuncargFinalize(...), scopeitem)
if a test is torn down and the next test requires a differently
parametrized "db":
for argname in item.callspec.params:
if argname in active_funcargs:
funcarginfo = active_funcargs[argname]
if funcarginfo.param != item.callspec.params[argname]:
funcarginfo.callfinalizer()
del node2funcarg[funcarginfo.scopeitem]
del active_funcargs[argname]
nodes_to_be_torn_down = ...
for node in nodes_to_be_torn_down:
if node in node2funcarg:
argname = node2funcarg[node]
active_funcargs[argname].callfinalizer()
del node2funcarg[node]
del active_funcargs[argname]
if a test is setup requiring a "db" funcarg:
if "db" in active_funcargs:
return active_funcargs["db"][0]
funcarginfo = setup_funcarg()
active_funcargs["db"] = funcarginfo
node2funcarg[funcarginfo.scopeitem] = "db"
Implementation plan for resources
------------------------------------------
1. Revert FuncargRequest to the old form, unmerge item/request
(done)
2. make funcarg factories be discovered at collection time
3. Introduce funcarg marker
4. Introduce funcarg scope parameter
5. Introduce funcarg parametrize parameter
6. make setup functions be discovered at collection time
7. (Introduce a pytest_fixture_protocol/setup_funcargs hook)
methods and data structures
--------------------------------
A FuncargManager holds all information about funcarg definitions
including parametrization and scope definitions. It implements
a pytest_generate_tests hook which performs parametrization as appropriate.
as a simple example, let's consider a tree where a test function requires
an "abc" funcarg and its factory defines it as parametrized and scoped
for Modules. When collection hits the function item, it creates
the metafunc object, and calls funcargdb.pytest_generate_tests(metafunc)
which looks up available funcarg factories and their scope and parametrization.
This information is equivalent to what can be provided today directly
at the function site and it should thus be relatively straightforward
to implement the additional way of defining parametrization/scoping.
conftest loading:
each funcarg-factory will populate the session.funcargmanager
When a test item is collected, it grows a dictionary
(funcargname2factorycalllist). A factory lookup is performed
for each required funcarg. The resulting factory call is stored
with the item. If a function is parametrized multiple items are
created with respective factory calls. Else if a factory is parametrized
multiple items and calls to the factory function are created as well.
At setup time, an item populates a funcargs mapping, mapping names
to values. If a value is not yet available, the funcarg factories are
queried for the given item. Test functions and setup functions are put
in a class which looks up required funcarg factories.

View File

@ -17,6 +17,7 @@ import re
import sys
import time
import pytest
from _pytest import nodes
from _pytest.config import filename_arg
# Python 2.X and 3.X compatibility
@ -84,6 +85,9 @@ class _NodeReporter(object):
def add_property(self, name, value):
self.properties.append((str(name), bin_xml_escape(value)))
def add_attribute(self, name, value):
self.attrs[str(name)] = bin_xml_escape(value)
def make_properties_node(self):
"""Return a Junit node containing custom properties, if any.
"""
@ -97,6 +101,7 @@ class _NodeReporter(object):
def record_testreport(self, testreport):
assert not self.testcase
names = mangle_test_address(testreport.nodeid)
existing_attrs = self.attrs
classnames = names[:-1]
if self.xml.prefix:
classnames.insert(0, self.xml.prefix)
@ -110,6 +115,7 @@ class _NodeReporter(object):
if hasattr(testreport, "url"):
attrs["url"] = testreport.url
self.attrs = attrs
self.attrs.update(existing_attrs) # restore any user-defined attributes
def to_xml(self):
testcase = Junit.testcase(time=self.duration, **self.attrs)
@ -124,10 +130,47 @@ class _NodeReporter(object):
self.append(node)
def write_captured_output(self, report):
for capname in ('out', 'err'):
content = getattr(report, 'capstd' + capname)
content_out = report.capstdout
content_log = report.caplog
content_err = report.capstderr
if content_log or content_out:
if content_log and self.xml.logging == 'system-out':
if content_out:
# syncing stdout and the log-output is not done yet. It's
# probably not worth the effort. Therefore, first the captured
# stdout is shown and then the captured logs.
content = '\n'.join([
' Captured Stdout '.center(80, '-'),
content_out,
'',
' Captured Log '.center(80, '-'),
content_log])
else:
content = content_log
else:
content = content_out
if content:
tag = getattr(Junit, 'system-' + capname)
tag = getattr(Junit, 'system-out')
self.append(tag(bin_xml_escape(content)))
if content_log or content_err:
if content_log and self.xml.logging == 'system-err':
if content_err:
content = '\n'.join([
' Captured Stderr '.center(80, '-'),
content_err,
'',
' Captured Log '.center(80, '-'),
content_log])
else:
content = content_log
else:
content = content_err
if content:
tag = getattr(Junit, 'system-err')
self.append(tag(bin_xml_escape(content)))
def append_pass(self, report):
@ -190,24 +233,56 @@ class _NodeReporter(object):
@pytest.fixture
def record_xml_property(request):
"""Add extra xml properties to the tag for the calling test.
def record_property(request):
"""Add an extra properties the calling test.
User properties become part of the test report and are available to the
configured reporters, like JUnit XML.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
Example::
def test_function(record_property):
record_property("example_key", 1)
"""
def append_property(name, value):
request.node.user_properties.append((name, value))
return append_property
@pytest.fixture
def record_xml_property(record_property):
"""(Deprecated) use record_property."""
import warnings
from _pytest import deprecated
warnings.warn(
deprecated.RECORD_XML_PROPERTY,
DeprecationWarning,
stacklevel=2
)
return record_property
@pytest.fixture
def record_xml_attribute(request):
"""Add extra xml attributes to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being
automatically xml-encoded
"""
request.node.warn(
code='C3',
message='record_xml_property is an experimental feature',
message='record_xml_attribute is an experimental feature',
)
xml = getattr(request.config, "_xml", None)
if xml is not None:
node_reporter = xml.node_reporter(request.node.nodeid)
return node_reporter.add_property
return node_reporter.add_attribute
else:
def add_property_noop(name, value):
def add_attr_noop(name, value):
pass
return add_property_noop
return add_attr_noop
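Usage mirrors ``record_property``; a minimal sketch of a test using the fixture (the attribute name is an arbitrary example)::

    def test_function(record_xml_attribute):
        record_xml_attribute("assertions", "REQ-1234")
        assert True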
def pytest_addoption(parser):
@ -227,13 +302,18 @@ def pytest_addoption(parser):
default=None,
help="prepend prefix to classnames in junit-xml output")
parser.addini("junit_suite_name", "Test suite name for JUnit report", default="pytest")
parser.addini("junit_logging", "Write captured log messages to JUnit report: "
"one of no|system-out|system-err",
default="no") # choices=['no', 'stdout', 'stderr'])
def pytest_configure(config):
xmlpath = config.option.xmlpath
# prevent opening xmllog on slave nodes (xdist)
if xmlpath and not hasattr(config, 'slaveinput'):
config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini("junit_suite_name"))
config._xml = LogXML(xmlpath, config.option.junitprefix,
config.getini("junit_suite_name"),
config.getini("junit_logging"))
config.pluginmanager.register(config._xml)
@ -252,7 +332,7 @@ def mangle_test_address(address):
except ValueError:
pass
# convert file path to dotted path
names[0] = names[0].replace("/", '.')
names[0] = names[0].replace(nodes.SEP, '.')
names[0] = _py_ext_re.sub("", names[0])
# put any params back
names[-1] += possible_open_bracket + params
@ -260,11 +340,12 @@ def mangle_test_address(address):
class LogXML(object):
def __init__(self, logfile, prefix, suite_name="pytest"):
def __init__(self, logfile, prefix, suite_name="pytest", logging="no"):
logfile = os.path.expanduser(os.path.expandvars(logfile))
self.logfile = os.path.normpath(os.path.abspath(logfile))
self.prefix = prefix
self.suite_name = suite_name
self.logging = logging
self.stats = dict.fromkeys([
'error',
'passed',
@ -372,14 +453,18 @@ class LogXML(object):
if report.when == "teardown":
reporter = self._opentestcase(report)
reporter.write_captured_output(report)
for propname, propvalue in report.user_properties:
reporter.add_property(propname, propvalue)
self.finalize(report)
report_wid = getattr(report, "worker_id", None)
report_ii = getattr(report, "item_index", None)
close_report = next(
(rep for rep in self.open_reports
if (rep.nodeid == report.nodeid and
getattr(rep, "item_index", None) == report_ii and
getattr(rep, "worker_id", None) == report_wid
getattr(rep, "item_index", None) == report_ii and
getattr(rep, "worker_id", None) == report_wid
)
), None)
if close_report:
@ -444,9 +529,9 @@ class LogXML(object):
"""
if self.global_properties:
return Junit.properties(
[
Junit.property(name=name, value=value)
for name, value in self.global_properties
]
[
Junit.property(name=name, value=value)
for name, value in self.global_properties
]
)
return ''

522
_pytest/logging.py Normal file
View File

@ -0,0 +1,522 @@
""" Access and control log capturing. """
from __future__ import absolute_import, division, print_function
import logging
from contextlib import closing, contextmanager
import re
import six
from _pytest.config import create_terminal_writer
import pytest
import py
DEFAULT_LOG_FORMAT = '%(filename)-25s %(lineno)4d %(levelname)-8s %(message)s'
DEFAULT_LOG_DATE_FORMAT = '%H:%M:%S'
class ColoredLevelFormatter(logging.Formatter):
"""
Colorize the %(levelname)..s part of the log format passed to __init__.
"""
LOGLEVEL_COLOROPTS = {
logging.CRITICAL: {'red'},
logging.ERROR: {'red', 'bold'},
logging.WARNING: {'yellow'},
logging.WARN: {'yellow'},
logging.INFO: {'green'},
logging.DEBUG: {'purple'},
logging.NOTSET: set(),
}
LEVELNAME_FMT_REGEX = re.compile(r'%\(levelname\)([+-]?\d*s)')
def __init__(self, terminalwriter, *args, **kwargs):
super(ColoredLevelFormatter, self).__init__(
*args, **kwargs)
if six.PY2:
self._original_fmt = self._fmt
else:
self._original_fmt = self._style._fmt
self._level_to_fmt_mapping = {}
levelname_fmt_match = self.LEVELNAME_FMT_REGEX.search(self._fmt)
if not levelname_fmt_match:
return
levelname_fmt = levelname_fmt_match.group()
for level, color_opts in self.LOGLEVEL_COLOROPTS.items():
formatted_levelname = levelname_fmt % {
'levelname': logging.getLevelName(level)}
# add ANSI escape sequences around the formatted levelname
color_kwargs = {name: True for name in color_opts}
colorized_formatted_levelname = terminalwriter.markup(
formatted_levelname, **color_kwargs)
self._level_to_fmt_mapping[level] = self.LEVELNAME_FMT_REGEX.sub(
colorized_formatted_levelname,
self._fmt)
def format(self, record):
fmt = self._level_to_fmt_mapping.get(
record.levelno, self._original_fmt)
if six.PY2:
self._fmt = fmt
else:
self._style._fmt = fmt
return super(ColoredLevelFormatter, self).format(record)
def get_option_ini(config, *names):
for name in names:
ret = config.getoption(name) # 'default' arg won't work as expected
if ret is None:
ret = config.getini(name)
if ret:
return ret
def pytest_addoption(parser):
"""Add options to control log capturing."""
group = parser.getgroup('logging')
def add_option_ini(option, dest, default=None, type=None, **kwargs):
parser.addini(dest, default=default, type=type,
help='default value for ' + option)
group.addoption(option, dest=dest, **kwargs)
add_option_ini(
'--no-print-logs',
dest='log_print', action='store_const', const=False, default=True,
type='bool',
help='disable printing caught logs on failed tests.')
add_option_ini(
'--log-level',
dest='log_level', default=None,
help='logging level used by the logging module')
add_option_ini(
'--log-format',
dest='log_format', default=DEFAULT_LOG_FORMAT,
help='log format as used by the logging module.')
add_option_ini(
'--log-date-format',
dest='log_date_format', default=DEFAULT_LOG_DATE_FORMAT,
help='log date format as used by the logging module.')
parser.addini(
'log_cli', default=False, type='bool',
help='enable log display during test run (also known as "live logging").')
add_option_ini(
'--log-cli-level',
dest='log_cli_level', default=None,
help='cli logging level.')
add_option_ini(
'--log-cli-format',
dest='log_cli_format', default=None,
help='log format as used by the logging module.')
add_option_ini(
'--log-cli-date-format',
dest='log_cli_date_format', default=None,
help='log date format as used by the logging module.')
add_option_ini(
'--log-file',
dest='log_file', default=None,
help='path to a file to which logging will be written.')
add_option_ini(
'--log-file-level',
dest='log_file_level', default=None,
help='log file logging level.')
add_option_ini(
'--log-file-format',
dest='log_file_format', default=DEFAULT_LOG_FORMAT,
help='log format as used by the logging module.')
add_option_ini(
'--log-file-date-format',
dest='log_file_date_format', default=DEFAULT_LOG_DATE_FORMAT,
help='log date format as used by the logging module.')
@contextmanager
def catching_logs(handler, formatter=None, level=None):
"""Context manager that prepares the whole logging machinery properly."""
root_logger = logging.getLogger()
if formatter is not None:
handler.setFormatter(formatter)
if level is not None:
handler.setLevel(level)
# Adding the same handler twice would confuse logging system.
# Just don't do that.
add_new_handler = handler not in root_logger.handlers
if add_new_handler:
root_logger.addHandler(handler)
if level is not None:
orig_level = root_logger.level
root_logger.setLevel(min(orig_level, level))
try:
yield handler
finally:
if level is not None:
root_logger.setLevel(orig_level)
if add_new_handler:
root_logger.removeHandler(handler)
class LogCaptureHandler(logging.StreamHandler):
"""A logging handler that stores log records and the log text."""
def __init__(self):
"""Creates a new log handler."""
logging.StreamHandler.__init__(self, py.io.TextIO())
self.records = []
def emit(self, record):
"""Keep the log records in a list in addition to the log text."""
self.records.append(record)
logging.StreamHandler.emit(self, record)
def reset(self):
self.records = []
self.stream = py.io.TextIO()
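Taken together, ``catching_logs`` and ``LogCaptureHandler`` can be combined roughly as follows (a sketch, not code from this module)::

    import logging

    handler = LogCaptureHandler()
    with catching_logs(handler, level=logging.INFO):
        logging.getLogger("demo").info("hello")
    assert handler.records[0].getMessage() == "hello"
    assert "hello" in handler.stream.getvalue()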
class LogCaptureFixture(object):
"""Provides access and control of log capturing."""
def __init__(self, item):
"""Creates a new funcarg."""
self._item = item
self._initial_log_levels = {} # type: Dict[str, int] # dict of log name -> log level
def _finalize(self):
"""Finalizes the fixture.
This restores the log levels changed by :meth:`set_level`.
"""
# restore log levels
for logger_name, level in self._initial_log_levels.items():
logger = logging.getLogger(logger_name)
logger.setLevel(level)
@property
def handler(self):
"""
:rtype: LogCaptureHandler
"""
return self._item.catch_log_handler
def get_records(self, when):
"""
Get the logging records for one of the possible test phases.
:param str when:
Which test phase to obtain the records from. Valid values are: "setup", "call" and "teardown".
:rtype: List[logging.LogRecord]
:return: the list of captured records at the given stage
.. versionadded:: 3.4
"""
handler = self._item.catch_log_handlers.get(when)
if handler:
return handler.records
else:
return []
@property
def text(self):
"""Returns the log text."""
return self.handler.stream.getvalue()
@property
def records(self):
"""Returns the list of log records."""
return self.handler.records
@property
def record_tuples(self):
"""Returns a list of a striped down version of log records intended
for use in assertion comparison.
The format of the tuple is:
(logger_name, log_level, message)
"""
return [(r.name, r.levelno, r.getMessage()) for r in self.records]
def clear(self):
"""Reset the list of log records and the captured log text."""
self.handler.reset()
def set_level(self, level, logger=None):
"""Sets the level for capturing of logs. The level will be restored to its previous value at the end of
the test.
:param int level: the level.
:param str logger: the logger to update. If not given, the root logger level is updated.
.. versionchanged:: 3.4
The levels of the loggers changed by this function will be restored to their initial values at the
end of the test.
"""
logger_name = logger
logger = logging.getLogger(logger_name)
# save the original log-level to restore it during teardown
self._initial_log_levels.setdefault(logger_name, logger.level)
logger.setLevel(level)
@contextmanager
def at_level(self, level, logger=None):
"""Context manager that sets the level for capturing of logs. After the end of the 'with' statement the
level is restored to its original value.
:param int level: the level.
:param str logger: the logger to update. If not given, the root logger level is updated.
"""
logger = logging.getLogger(logger)
orig_level = logger.level
logger.setLevel(level)
try:
yield
finally:
logger.setLevel(orig_level)
@pytest.fixture
def caplog(request):
"""Access and control log capturing.
Captured logs are available through the following properties/methods::
* caplog.text -> string containing formatted log output
* caplog.records -> list of logging.LogRecord instances
* caplog.record_tuples -> list of (logger_name, level, message) tuples
* caplog.clear() -> clear captured records and formatted log output string
"""
result = LogCaptureFixture(request.node)
yield result
result._finalize()
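A short sketch of a test exercising the fixture (illustrative, assuming a logger named "myapp")::

    import logging

    def test_warning_logged(caplog):
        caplog.set_level(logging.INFO)
        logging.getLogger("myapp").warning("disk almost full")
        assert caplog.record_tuples == [("myapp", logging.WARNING, "disk almost full")]
        assert "disk almost full" in caplog.text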
def get_actual_log_level(config, *setting_names):
"""Return the actual logging level."""
for setting_name in setting_names:
log_level = config.getoption(setting_name)
if log_level is None:
log_level = config.getini(setting_name)
if log_level:
break
else:
return
if isinstance(log_level, six.string_types):
log_level = log_level.upper()
try:
return int(getattr(logging, log_level, log_level))
except ValueError:
# Python logging does not recognise this as a logging level
raise pytest.UsageError(
"'{0}' is not recognized as a logging level name for "
"'{1}'. Please consider passing the "
"logging level num instead.".format(
log_level,
setting_name))
def pytest_configure(config):
config.pluginmanager.register(LoggingPlugin(config), 'logging-plugin')
@contextmanager
def _dummy_context_manager():
yield
class LoggingPlugin(object):
"""Attaches to the logging module and captures log messages for each test.
"""
def __init__(self, config):
"""Creates a new plugin to capture log messages.
The formatter can be safely shared across all handlers so
create a single one for the entire test session here.
"""
self._config = config
# enable verbose output automatically if live logging is enabled
if self._log_cli_enabled() and not config.getoption('verbose'):
# sanity check: terminal reporter should not have been loaded at this point
assert self._config.pluginmanager.get_plugin('terminalreporter') is None
config.option.verbose = 1
self.print_logs = get_option_ini(config, 'log_print')
self.formatter = logging.Formatter(get_option_ini(config, 'log_format'),
get_option_ini(config, 'log_date_format'))
self.log_level = get_actual_log_level(config, 'log_level')
log_file = get_option_ini(config, 'log_file')
if log_file:
self.log_file_level = get_actual_log_level(config, 'log_file_level')
log_file_format = get_option_ini(config, 'log_file_format', 'log_format')
log_file_date_format = get_option_ini(config, 'log_file_date_format', 'log_date_format')
# Each pytest runtests session will write to a clean logfile
self.log_file_handler = logging.FileHandler(log_file, mode='w')
log_file_formatter = logging.Formatter(log_file_format, datefmt=log_file_date_format)
self.log_file_handler.setFormatter(log_file_formatter)
else:
self.log_file_handler = None
# initialized during pytest_runtestloop
self.log_cli_handler = None
def _log_cli_enabled(self):
"""Return True if log_cli should be considered enabled, either explicitly
or because --log-cli-level was given in the command-line.
"""
return self._config.getoption('--log-cli-level') is not None or \
self._config.getini('log_cli')
@contextmanager
def _runtest_for(self, item, when):
"""Implements the internals of pytest_runtest_xxx() hook."""
with catching_logs(LogCaptureHandler(),
formatter=self.formatter, level=self.log_level) as log_handler:
if self.log_cli_handler:
self.log_cli_handler.set_when(when)
if item is None:
yield # run the test
return
if not hasattr(item, 'catch_log_handlers'):
item.catch_log_handlers = {}
item.catch_log_handlers[when] = log_handler
item.catch_log_handler = log_handler
try:
yield # run test
finally:
del item.catch_log_handler
if when == 'teardown':
del item.catch_log_handlers
if self.print_logs:
# Add a captured log section to the report.
log = log_handler.stream.getvalue().strip()
item.add_report_section(when, 'log', log)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item):
with self._runtest_for(item, 'setup'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(self, item):
with self._runtest_for(item, 'call'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_teardown(self, item):
with self._runtest_for(item, 'teardown'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logstart(self):
if self.log_cli_handler:
self.log_cli_handler.reset()
with self._runtest_for(None, 'start'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logfinish(self):
with self._runtest_for(None, 'finish'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtestloop(self, session):
"""Runs all collected test items."""
self._setup_cli_logging()
with self.live_logs_context:
if self.log_file_handler is not None:
with closing(self.log_file_handler):
with catching_logs(self.log_file_handler,
level=self.log_file_level):
yield # run all the tests
else:
yield # run all the tests
def _setup_cli_logging(self):
"""Sets up the handler and logger for the Live Logs feature, if enabled.
This must be done right before starting the loop so we can access the terminal reporter plugin.
"""
terminal_reporter = self._config.pluginmanager.get_plugin('terminalreporter')
if self._log_cli_enabled() and terminal_reporter is not None:
capture_manager = self._config.pluginmanager.get_plugin('capturemanager')
log_cli_handler = _LiveLoggingStreamHandler(terminal_reporter, capture_manager)
log_cli_format = get_option_ini(self._config, 'log_cli_format', 'log_format')
log_cli_date_format = get_option_ini(self._config, 'log_cli_date_format', 'log_date_format')
if self._config.option.color != 'no' and ColoredLevelFormatter.LEVELNAME_FMT_REGEX.search(log_cli_format):
log_cli_formatter = ColoredLevelFormatter(create_terminal_writer(self._config),
log_cli_format, datefmt=log_cli_date_format)
else:
log_cli_formatter = logging.Formatter(log_cli_format, datefmt=log_cli_date_format)
log_cli_level = get_actual_log_level(self._config, 'log_cli_level', 'log_level')
self.log_cli_handler = log_cli_handler
self.live_logs_context = catching_logs(log_cli_handler, formatter=log_cli_formatter, level=log_cli_level)
else:
self.live_logs_context = _dummy_context_manager()
class _LiveLoggingStreamHandler(logging.StreamHandler):
"""
Custom StreamHandler used by the live logging feature: it will write a newline before the first log message
in each test.
During live logging we must also explicitly disable stdout/stderr capturing otherwise it will get captured
and won't appear in the terminal.
"""
def __init__(self, terminal_reporter, capture_manager):
"""
:param _pytest.terminal.TerminalReporter terminal_reporter:
:param _pytest.capture.CaptureManager capture_manager:
"""
logging.StreamHandler.__init__(self, stream=terminal_reporter)
self.capture_manager = capture_manager
self.reset()
self.set_when(None)
self._test_outcome_written = False
def reset(self):
"""Reset the handler; should be called before the start of each test"""
self._first_record_emitted = False
def set_when(self, when):
"""Prepares for the given test phase (setup/call/teardown)"""
self._when = when
self._section_name_shown = False
if when == 'start':
self._test_outcome_written = False
def emit(self, record):
if self.capture_manager is not None:
self.capture_manager.suspend_global_capture()
try:
if not self._first_record_emitted:
self.stream.write('\n')
self._first_record_emitted = True
elif self._when in ('teardown', 'finish'):
if not self._test_outcome_written:
self._test_outcome_written = True
self.stream.write('\n')
if not self._section_name_shown and self._when:
self.stream.section('live log ' + self._when, sep='-', bold=True)
self._section_name_shown = True
logging.StreamHandler.emit(self, record)
finally:
if self.capture_manager is not None:
self.capture_manager.resume_global_capture()

View File

@ -1,22 +1,22 @@
""" core implementation of testing process: init, session, runtest loop. """
from __future__ import absolute_import, division, print_function
import contextlib
import functools
import os
import pkgutil
import six
import sys
import _pytest
from _pytest import nodes
import _pytest._code
import py
try:
from collections import MutableMapping as MappingMixin
except ImportError:
from UserDict import DictMixin as MappingMixin
from _pytest.config import directory_arg, UsageError, hookimpl
from _pytest.runner import collect_one_node, exit
from _pytest.outcomes import exit
from _pytest.runner import collect_one_node
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
# exitcodes for the command line
EXIT_OK = 0
@ -29,66 +29,68 @@ EXIT_NOTESTSCOLLECTED = 5
def pytest_addoption(parser):
parser.addini("norecursedirs", "directory patterns to avoid for recursion",
type="args", default=['.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'])
parser.addini("testpaths", "directories to search for tests when no files or directories are given in the command line.",
type="args", default=[])
#parser.addini("dirpatterns",
type="args", default=['.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'])
parser.addini("testpaths", "directories to search for tests when no files or directories are given in the "
"command line.",
type="args", default=[])
# parser.addini("dirpatterns",
# "patterns specifying possible locations of test files",
# type="linelist", default=["**/test_*.txt",
# "**/test_*.py", "**/*_test.py"]
#)
# )
group = parser.getgroup("general", "running and selection options")
group._addoption('-x', '--exitfirst', action="store_const",
dest="maxfail", const=1,
help="exit instantly on first error or failed test."),
dest="maxfail", const=1,
help="exit instantly on first error or failed test."),
group._addoption('--maxfail', metavar="num",
action="store", type=int, dest="maxfail", default=0,
help="exit after first num failures or errors.")
action="store", type=int, dest="maxfail", default=0,
help="exit after first num failures or errors.")
group._addoption('--strict', action="store_true",
help="run pytest in strict mode, warnings become errors.")
help="marks not registered in configuration file raise errors.")
group._addoption("-c", metavar="file", type=str, dest="inifilename",
help="load configuration from `file` instead of trying to locate one of the implicit configuration files.")
help="load configuration from `file` instead of trying to locate one of the implicit "
"configuration files.")
group._addoption("--continue-on-collection-errors", action="store_true",
default=False, dest="continue_on_collection_errors",
help="Force test execution even if collection errors occur.")
default=False, dest="continue_on_collection_errors",
help="Force test execution even if collection errors occur.")
group._addoption("--rootdir", action="store",
dest="rootdir",
help="Define root directory for tests. Can be relative path: 'root_dir', './root_dir', "
"'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: "
"'$HOME/root_dir'.")
group = parser.getgroup("collect", "collection")
group.addoption('--collectonly', '--collect-only', action="store_true",
help="only collect tests, don't execute them."),
help="only collect tests, don't execute them."),
group.addoption('--pyargs', action="store_true",
help="try to interpret all arguments as python packages.")
help="try to interpret all arguments as python packages.")
group.addoption("--ignore", action="append", metavar="path",
help="ignore path during collection (multi-allowed).")
help="ignore path during collection (multi-allowed).")
group.addoption("--deselect", action="append", metavar="nodeid_prefix",
help="deselect item during collection (multi-allowed).")
# when changing this to --conf-cut-dir, config.py Conftest.setinitial
# needs upgrading as well
group.addoption('--confcutdir', dest="confcutdir", default=None,
metavar="dir", type=functools.partial(directory_arg, optname="--confcutdir"),
help="only load conftest.py's relative to specified dir.")
metavar="dir", type=functools.partial(directory_arg, optname="--confcutdir"),
help="only load conftest.py's relative to specified dir.")
group.addoption('--noconftest', action="store_true",
dest="noconftest", default=False,
help="Don't load any conftest.py files.")
dest="noconftest", default=False,
help="Don't load any conftest.py files.")
group.addoption('--keepduplicates', '--keep-duplicates', action="store_true",
dest="keepduplicates", default=False,
help="Keep duplicate tests.")
dest="keepduplicates", default=False,
help="Keep duplicate tests.")
group.addoption('--collect-in-virtualenv', action='store_true',
dest='collect_in_virtualenv', default=False,
help="Don't ignore tests in a local virtualenv directory")
group = parser.getgroup("debugconfig",
"test session debugging and configuration")
"test session debugging and configuration")
group.addoption('--basetemp', dest="basetemp", default=None, metavar="dir",
help="base temporary directory for this test run.")
def pytest_namespace():
"""keeping this one works around a deeper startup issue in pytest
i tried to find it for a while but the amount of time turned unsustainable,
so i put a hack in to revisit later
"""
return {}
help="base temporary directory for this test run.")
def pytest_configure(config):
__import__('pytest').config = config # compatibility
__import__('pytest').config = config  # compatibility
def wrap_session(config, doit):
@ -105,6 +107,8 @@ def wrap_session(config, doit):
session.exitstatus = doit(config, session) or 0
except UsageError:
raise
except Failed:
session.exitstatus = EXIT_TESTSFAILED
except KeyboardInterrupt:
excinfo = _pytest._code.ExceptionInfo()
if initstate < 2 and isinstance(excinfo.value, exit.Exception):
@ -112,7 +116,7 @@ def wrap_session(config, doit):
excinfo.typename, excinfo.value.msg))
config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
session.exitstatus = EXIT_INTERRUPTED
except:
except: # noqa
excinfo = _pytest._code.ExceptionInfo()
config.notify_exception(excinfo, config.option)
session.exitstatus = EXIT_INTERNALERROR
@ -160,22 +164,38 @@ def pytest_runtestloop(session):
return True
for i, item in enumerate(session.items):
nextitem = session.items[i+1] if i+1 < len(session.items) else None
nextitem = session.items[i + 1] if i + 1 < len(session.items) else None
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
if session.shouldfail:
raise session.Failed(session.shouldfail)
if session.shouldstop:
raise session.Interrupted(session.shouldstop)
return True
def _in_venv(path):
"""Attempts to detect if ``path`` is the root of a Virtual Environment by
checking for the existence of the appropriate activate script"""
bindir = path.join('Scripts' if sys.platform.startswith('win') else 'bin')
if not bindir.isdir():
return False
activates = ('activate', 'activate.csh', 'activate.fish',
'Activate', 'Activate.bat', 'Activate.ps1')
return any([fname.basename in activates for fname in bindir.listdir()])
def pytest_ignore_collect(path, config):
p = path.dirpath()
ignore_paths = config._getconftest_pathlist("collect_ignore", path=p)
ignore_paths = config._getconftest_pathlist("collect_ignore", path=path.dirpath())
ignore_paths = ignore_paths or []
excludeopt = config.getoption("ignore")
if excludeopt:
ignore_paths.extend([py.path.local(x) for x in excludeopt])
if path in ignore_paths:
if py.path.local(path) in ignore_paths:
return True
allow_in_venv = config.getoption("collect_in_virtualenv")
if _in_venv(path) and not allow_in_venv:
return True
# Skip duplicate paths.
@ -190,7 +210,65 @@ def pytest_ignore_collect(path, config):
return False
class FSHookProxy:
def pytest_collection_modifyitems(items, config):
deselect_prefixes = tuple(config.getoption("deselect") or [])
if not deselect_prefixes:
return
remaining = []
deselected = []
for colitem in items:
if colitem.nodeid.startswith(deselect_prefixes):
deselected.append(colitem)
else:
remaining.append(colitem)
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
@contextlib.contextmanager
def _patched_find_module():
"""Patch bug in pkgutil.ImpImporter.find_module
When using pkgutil.find_loader on python<3.4 it removes symlinks
from the path due to a call to os.path.realpath. This is not consistent
with actually doing the import (in these versions, pkgutil and __import__
did not share the same underlying code). This can break conftest
discovery for pytest where symlinks are involved.
The only python<3.4 version supported by pytest is python 2.7.
"""
if six.PY2: # python 3.4+ uses importlib instead
def find_module_patched(self, fullname, path=None):
# Note: we ignore 'path' argument since it is only used via meta_path
subname = fullname.split(".")[-1]
if subname != fullname and self.path is None:
return None
if self.path is None:
path = None
else:
# original: path = [os.path.realpath(self.path)]
path = [self.path]
try:
file, filename, etc = pkgutil.imp.find_module(subname,
path)
except ImportError:
return None
return pkgutil.ImpLoader(fullname, file, filename, etc)
old_find_module = pkgutil.ImpImporter.find_module
pkgutil.ImpImporter.find_module = find_module_patched
try:
yield
finally:
pkgutil.ImpImporter.find_module = old_find_module
else:
yield
class FSHookProxy(object):
def __init__(self, fspath, pm, remove_mods):
self.fspath = fspath
self.pm = pm
@ -201,373 +279,42 @@ class FSHookProxy:
self.__dict__[name] = x
return x
class _CompatProperty(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(__import__('pytest'), self.name)
class NodeKeywords(MappingMixin):
def __init__(self, node):
self.node = node
self.parent = node.parent
self._markers = {node.name: True}
def __getitem__(self, key):
try:
return self._markers[key]
except KeyError:
if self.parent is None:
raise
return self.parent.keywords[key]
def __setitem__(self, key, value):
self._markers[key] = value
def __delitem__(self, key):
raise ValueError("cannot delete key in keywords dict")
def __iter__(self):
seen = set(self._markers)
if self.parent is not None:
seen.update(self.parent.keywords)
return iter(seen)
def __len__(self):
return len(self.__iter__())
def keys(self):
return list(self)
def __repr__(self):
return "<NodeKeywords for node %s>" % (self.node, )
class Node(object):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
def __init__(self, name, parent=None, config=None, session=None):
#: a unique name within the scope of the parent node
self.name = name
#: the parent collector node.
self.parent = parent
#: the pytest config object
self.config = config or parent.config
#: the session this node is part of
self.session = session or parent.session
#: filesystem path where this node was collected from (can be None)
self.fspath = getattr(parent, 'fspath', None)
#: keywords/markers collected from all scopes
self.keywords = NodeKeywords(self)
#: allow adding of extra keywords to use for matching
self.extra_keyword_matches = set()
# used for storing artificial fixturedefs for direct parametrization
self._name2pseudofixturedef = {}
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(__import__('pytest'), name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
return "<%s %r>" %(self.__class__.__name__,
getattr(self, 'name', None))
def warn(self, code, message):
""" generate a warning with the given code and message for this
item. """
assert isinstance(code, str)
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
# methods for ordering nodes
@property
def nodeid(self):
""" a ::-separated string denoting its collection tree address. """
try:
return self._nodeid
except AttributeError:
self._nodeid = x = self._makeid()
return x
def _makeid(self):
return self.parent.nodeid + "::" + self.name
def __hash__(self):
return hash(self.nodeid)
def setup(self):
pass
def teardown(self):
pass
def _memoizedcall(self, attrname, function):
exattrname = "_ex_" + attrname
failure = getattr(self, exattrname, None)
if failure is not None:
py.builtin._reraise(failure[0], failure[1], failure[2])
if hasattr(self, attrname):
return getattr(self, attrname)
try:
res = function()
except py.builtin._sysex:
raise
except:
failure = sys.exc_info()
setattr(self, exattrname, failure)
raise
setattr(self, attrname, res)
return res
def listchain(self):
""" return list of all parent collectors up to self,
starting from root of collection tree. """
chain = []
item = self
while item is not None:
chain.append(item)
item = item.parent
chain.reverse()
return chain
def add_marker(self, marker):
""" dynamically add a marker object to the node.
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator, MARK_GEN
if isinstance(marker, py.builtin._basestring):
marker = getattr(MARK_GEN, marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
def get_marker(self, name):
""" get a marker object from this node or None if
the node doesn't have a marker with that name. """
val = self.keywords.get(name, None)
if val is not None:
from _pytest.mark import MarkInfo, MarkDecorator
if isinstance(val, (MarkDecorator, MarkInfo)):
return val
def listextrakeywords(self):
""" Return a set of all extra keywords in self and any parents."""
extra_keywords = set()
item = self
for item in self.listchain():
extra_keywords.update(item.extra_keyword_matches)
return extra_keywords
def listnames(self):
return [x.name for x in self.listchain()]
def addfinalizer(self, fin):
""" register a function to be called when this node is finalized.
This method can only be called when this node is active
in a setup chain, for example during self.setup().
"""
self.session._setupstate.addfinalizer(fin, self)
def getparent(self, cls):
""" get the next parent node (including ourself)
which is an instance of the given class"""
current = self
while current and not isinstance(current, cls):
current = current.parent
return current
def _prunetraceback(self, excinfo):
pass
def _repr_failure_py(self, excinfo, style=None):
fm = self.session._fixturemanager
if excinfo.errisinstance(fm.FixtureLookupError):
return excinfo.value.formatrepr()
tbfilter = True
if self.config.option.fulltrace:
style="long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
# XXX should excinfo.getrepr record all data and toterminal() process it?
if style is None:
if self.config.option.tbstyle == "short":
style = "short"
else:
style = "long"
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
repr_failure = _repr_failure_py
class Collector(Node):
""" Collector instances create children through collect()
and thus iteratively build a tree.
"""
class CollectError(Exception):
""" an error during collection, contains a custom message. """
def collect(self):
""" returns a list of children (items and collectors)
for this collection node.
"""
raise NotImplementedError("abstract")
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
exc = excinfo.value
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
ntraceback = traceback.cut(path=self.fspath)
if ntraceback == traceback:
ntraceback = ntraceback.cut(excludepath=tracebackcutdir)
excinfo.traceback = ntraceback.filter()
class FSCollector(Collector):
def __init__(self, fspath, parent=None, config=None, session=None):
fspath = py.path.local(fspath) # xxx only for test_resultlog.py?
name = fspath.basename
if parent is not None:
rel = fspath.relto(parent.fspath)
if rel:
name = rel
name = name.replace(os.sep, "/")
super(FSCollector, self).__init__(name, parent, config, session)
self.fspath = fspath
def _makeid(self):
relpath = self.fspath.relto(self.config.rootdir)
if os.sep != "/":
relpath = relpath.replace(os.sep, "/")
return relpath
class File(FSCollector):
""" base class for collecting tests from a file. """
class Item(Node):
""" a basic test invocation item. Note that for a single function
there might be multiple test invocation items.
"""
nextitem = None
def __init__(self, name, parent=None, config=None, session=None):
super(Item, self).__init__(name, parent, config, session)
self._report_sections = []
def add_report_section(self, when, key, content):
if content:
self._report_sections.append((when, key, content))
def reportinfo(self):
return self.fspath, None, ""
@property
def location(self):
try:
return self._location
except AttributeError:
location = self.reportinfo()
# bestrelpath is a quite slow function
cache = self.config.__dict__.setdefault("_bestrelpathcache", {})
try:
fspath = cache[location[0]]
except KeyError:
fspath = self.session.fspath.bestrelpath(location[0])
cache[location[0]] = fspath
location = (fspath, location[1], str(location[2]))
self._location = location
return location
class NoMatch(Exception):
""" raised if matching cannot locate a matching names. """
class Interrupted(KeyboardInterrupt):
""" signals an interrupted test run. """
__module__ = 'builtins' # for py3
__module__ = 'builtins' # for py3
class Session(FSCollector):
class Failed(Exception):
""" signals an stop as failed test run. """
class Session(nodes.FSCollector):
Interrupted = Interrupted
Failed = Failed
def __init__(self, config):
FSCollector.__init__(self, config.rootdir, parent=None,
config=config, session=self)
nodes.FSCollector.__init__(
self, config.rootdir, parent=None,
config=config, session=self, nodeid="")
self.testsfailed = 0
self.testscollected = 0
self.shouldstop = False
self.shouldfail = False
self.trace = config.trace.root.get("collection")
self._norecursepatterns = config.getini("norecursedirs")
self.startdir = py.path.local()
self.config.pluginmanager.register(self, name="session")
def _makeid(self):
return ""
self.config.pluginmanager.register(self, name="session")
@hookimpl(tryfirst=True)
def pytest_collectstart(self):
if self.shouldfail:
raise self.Failed(self.shouldfail)
if self.shouldstop:
raise self.Interrupted(self.shouldstop)
@ -577,7 +324,7 @@ class Session(FSCollector):
self.testsfailed += 1
maxfail = self.config.getvalue("maxfail")
if maxfail and self.testsfailed >= maxfail:
self.shouldstop = "stopping after %d failures" % (
self.shouldfail = "stopping after %d failures" % (
self.testsfailed)
pytest_collectreport = pytest_runtest_logreport
@ -604,7 +351,7 @@ class Session(FSCollector):
items = self._perform_collect(args, genitems)
self.config.pluginmanager.check_pending()
hook.pytest_collection_modifyitems(session=self,
config=self.config, items=items)
config=self.config, items=items)
finally:
hook.pytest_collection_finish(session=self)
self.testscollected = len(items)
@ -692,9 +439,10 @@ class Session(FSCollector):
"""Convert a dotted module name to path.
"""
import pkgutil
try:
loader = pkgutil.find_loader(x)
with _patched_find_module():
loader = pkgutil.find_loader(x)
except ImportError:
return x
if loader is None:
@ -702,7 +450,8 @@ class Session(FSCollector):
# This method is sometimes invoked when AssertionRewritingHook, which
# does not define a get_filename method, is already in place:
try:
path = loader.get_filename(x)
with _patched_find_module():
path = loader.get_filename(x)
except AttributeError:
# Retrieve path from AssertionRewritingHook:
path = loader.modules[x][0].co_filename
@ -746,11 +495,11 @@ class Session(FSCollector):
nextnames = names[1:]
resultnodes = []
for node in matching:
if isinstance(node, Item):
if isinstance(node, nodes.Item):
if not names:
resultnodes.append(node)
continue
assert isinstance(node, Collector)
assert isinstance(node, nodes.Collector)
rep = collect_one_node(node)
if rep.passed:
has_matched = False
@ -772,11 +521,11 @@ class Session(FSCollector):
def genitems(self, node):
self.trace("genitems", node)
if isinstance(node, Item):
if isinstance(node, nodes.Item):
node.ihook.pytest_itemcollected(item=node)
yield node
else:
assert isinstance(node, Collector)
assert isinstance(node, nodes.Collector)
rep = collect_one_node(node)
if rep.passed:
for subnode in rep.result:

_pytest/mark/__init__.py Normal file

@ -0,0 +1,157 @@
""" generic mechanism for marking and selecting python functions. """
from __future__ import absolute_import, division, print_function
from _pytest.config import UsageError
from .structures import (
ParameterSet, EMPTY_PARAMETERSET_OPTION, MARK_GEN,
Mark, MarkInfo, MarkDecorator, MarkGenerator,
transfer_markers, get_empty_parameterset_mark
)
from .legacy import matchkeyword, matchmark
__all__ = [
'Mark', 'MarkInfo', 'MarkDecorator', 'MarkGenerator',
'transfer_markers', 'get_empty_parameterset_mark'
]
class MarkerError(Exception):
"""Error in use of a pytest marker/attribute."""
def param(*values, **kw):
"""Specify a parameter in `pytest.mark.parametrize`_ calls or
:ref:`parametrized fixtures <fixture-parametrize-marks>`.
.. code-block:: python
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
pytest.param("6*9", 42, marks=pytest.mark.xfail),
])
def test_eval(test_input, expected):
assert eval(test_input) == expected
:param values: variable args of the values of the parameter set, in order.
:keyword marks: a single mark or a list of marks to be applied to this parameter set.
:keyword str id: the id to attribute to this parameter set.
"""
return ParameterSet.param(*values, **kw)
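# Hedged usage sketch (not part of this diff): pytest.param inside a
# parametrize list; the values and the "div-by-zero" id are illustrative.
import pytest

@pytest.mark.parametrize("a,b,expected", [
    (6, 2, 3),
    pytest.param(1, 0, None, id="div-by-zero",
                 marks=pytest.mark.xfail(raises=ZeroDivisionError)),
])
def test_div(a, b, expected):
    assert a / b == expected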
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption(
'-k',
action="store", dest="keyword", default='', metavar="EXPRESSION",
help="only run tests which match the given substring expression. "
"An expression is a python evaluatable expression "
"where all names are substring-matched against test names "
"and their parent classes. Example: -k 'test_method or test_"
"other' matches all test functions and classes whose name "
"contains 'test_method' or 'test_other', while -k 'not test_method' "
"matches those that don't contain 'test_method' in their names. "
"Additionally keywords are matched to classes and functions "
"containing extra names in their 'extra_keyword_matches' set, "
"as well as functions which have names assigned directly to them."
)
group._addoption(
"-m",
action="store", dest="markexpr", default="", metavar="MARKEXPR",
help="only run tests matching given mark expression. "
"example: -m 'mark1 and not mark2'."
)
group.addoption(
"--markers", action="store_true",
help="show markers (builtin, plugin and per-project ones)."
)
parser.addini("markers", "markers for test functions", 'linelist')
parser.addini(
EMPTY_PARAMETERSET_OPTION,
"default marker for empty parametersets")
def pytest_cmdline_main(config):
import _pytest.config
if config.option.markers:
config._do_configure()
tw = _pytest.config.create_terminal_writer(config)
for line in config.getini("markers"):
parts = line.split(":", 1)
name = parts[0]
rest = parts[1] if len(parts) == 2 else ''
tw.write("@pytest.mark.%s:" % name, bold=True)
tw.line(rest)
tw.line()
config._ensure_unconfigure()
return 0
pytest_cmdline_main.tryfirst = True
def deselect_by_keyword(items, config):
keywordexpr = config.option.keyword.lstrip()
if keywordexpr.startswith("-"):
keywordexpr = "not " + keywordexpr[1:]
selectuntil = False
if keywordexpr[-1:] == ":":
selectuntil = True
keywordexpr = keywordexpr[:-1]
remaining = []
deselected = []
for colitem in items:
if keywordexpr and not matchkeyword(colitem, keywordexpr):
deselected.append(colitem)
else:
if selectuntil:
keywordexpr = None
remaining.append(colitem)
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
def deselect_by_mark(items, config):
matchexpr = config.option.markexpr
if not matchexpr:
return
remaining = []
deselected = []
for item in items:
if matchmark(item, matchexpr):
remaining.append(item)
else:
deselected.append(item)
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
def pytest_collection_modifyitems(items, config):
deselect_by_keyword(items, config)
deselect_by_mark(items, config)
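# Hedged sketch (not part of this diff) of how the two deselection passes
# above interact; the test and marker names are illustrative.
import pytest

@pytest.mark.webtest
def test_send_http():
    pass

def test_something_quick():
    pass

# `pytest -m webtest` keeps only test_send_http (deselect_by_mark), while
# `pytest -k "quick and not http"` keeps only test_something_quick
# (deselect_by_keyword), since -k substring-matches test and class names.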
def pytest_configure(config):
config._old_mark_config = MARK_GEN._config
if config.option.strict:
MARK_GEN._config = config
empty_parameterset = config.getini(EMPTY_PARAMETERSET_OPTION)
if empty_parameterset not in ('skip', 'xfail', None, ''):
raise UsageError(
"{!s} must be one of skip and xfail,"
" but it is {!r}".format(EMPTY_PARAMETERSET_OPTION, empty_parameterset))
def pytest_unconfigure(config):
MARK_GEN._config = getattr(config, '_old_mark_config', None)

_pytest/mark/evaluate.py Normal file

@ -0,0 +1,118 @@
import os
import six
import sys
import platform
import traceback
from ..outcomes import fail, TEST_OUTCOME
def cached_eval(config, expr, d):
if not hasattr(config, '_evalcache'):
config._evalcache = {}
try:
return config._evalcache[expr]
except KeyError:
import _pytest._code
exprcode = _pytest._code.compile(expr, mode="eval")
config._evalcache[expr] = x = eval(exprcode, d)
return x
class MarkEvaluator(object):
def __init__(self, item, name):
self.item = item
self._marks = None
self._mark = None
self._mark_name = name
def __bool__(self):
# don't cache here to prevent staleness
return bool(self._get_marks())
__nonzero__ = __bool__
def wasvalid(self):
return not hasattr(self, 'exc')
def _get_marks(self):
return [x for x in self.item.iter_markers() if x.name == self._mark_name]
def invalidraise(self, exc):
raises = self.get('raises')
if not raises:
return
return not isinstance(exc, raises)
def istrue(self):
try:
return self._istrue()
except TEST_OUTCOME:
self.exc = sys.exc_info()
if isinstance(self.exc[1], SyntaxError):
msg = [" " * (self.exc[1].offset + 4) + "^", ]
msg.append("SyntaxError: invalid syntax")
else:
msg = traceback.format_exception_only(*self.exc[:2])
fail("Error evaluating %r expression\n"
" %s\n"
"%s"
% (self._mark_name, self.expr, "\n".join(msg)),
pytrace=False)
def _getglobals(self):
d = {'os': os, 'sys': sys, 'platform': platform, 'config': self.item.config}
if hasattr(self.item, 'obj'):
d.update(self.item.obj.__globals__)
return d
def _istrue(self):
if hasattr(self, 'result'):
return self.result
self._marks = self._get_marks()
if self._marks:
self.result = False
for mark in self._marks:
self._mark = mark
if 'condition' in mark.kwargs:
args = (mark.kwargs['condition'],)
else:
args = mark.args
for expr in args:
self.expr = expr
if isinstance(expr, six.string_types):
d = self._getglobals()
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in mark.kwargs:
# XXX better be checked at collection time
msg = "you need to specify reason=STRING " \
"when using booleans as conditions."
fail(msg)
result = bool(expr)
if result:
self.result = True
self.reason = mark.kwargs.get('reason', None)
self.expr = expr
return self.result
if not args:
self.result = True
self.reason = mark.kwargs.get('reason', None)
return self.result
return False
def get(self, attr, default=None):
if self._mark is None:
return default
return self._mark.kwargs.get(attr, default)
def getexplanation(self):
expl = getattr(self, 'reason', None) or self.get('reason', None)
if not expl:
if not hasattr(self, 'expr'):
return ""
else:
return "condition: " + str(self.expr)
return expl
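# Hedged sketch (not part of this diff): a string condition of the kind
# MarkEvaluator evaluates through cached_eval, with os/sys/platform/config
# available as globals; the platform check is illustrative.
import pytest

@pytest.mark.skipif("sys.platform == 'win32'",
                    reason="not supported on Windows")
def test_unix_only():
    pass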

_pytest/mark/legacy.py Normal file

@ -0,0 +1,97 @@
"""
this is a place where we put datastructures used by legacy apis
we hope to remove
"""
import attr
import keyword
from . import MarkInfo, MarkDecorator
from _pytest.config import UsageError
@attr.s
class MarkMapping(object):
"""Provides a local mapping for markers where item access
resolves to True if the marker is present. """
own_mark_names = attr.ib()
@classmethod
def from_keywords(cls, keywords):
mark_names = set()
for key, value in keywords.items():
if isinstance(value, MarkInfo) or isinstance(value, MarkDecorator):
mark_names.add(key)
return cls(mark_names)
def __getitem__(self, name):
return name in self.own_mark_names
class KeywordMapping(object):
"""Provides a local mapping for keywords.
Given a list of names, map any substring of one of these names to True.
"""
def __init__(self, names):
self._names = names
@classmethod
def from_item(cls, item):
mapped_names = set()
# Add the names of the current item and any parent items
import pytest
for item in item.listchain():
if not isinstance(item, pytest.Instance):
mapped_names.add(item.name)
# Add the names added as extra keywords to current or parent items
for name in item.listextrakeywords():
mapped_names.add(name)
# Add the names attached to the current function through direct assignment
if hasattr(item, 'function'):
for name in item.function.__dict__:
mapped_names.add(name)
return cls(mapped_names)
def __getitem__(self, subname):
for name in self._names:
if subname in name:
return True
return False
python_keywords_allowed_list = ["or", "and", "not"]
def matchmark(colitem, markexpr):
"""Tries to match on any marker names, attached to the given colitem."""
return eval(markexpr, {}, MarkMapping.from_keywords(colitem.keywords))
def matchkeyword(colitem, keywordexpr):
"""Tries to match given keyword expression to given collector item.
Will match on the name of colitem, including the names of its parents.
Only matches names of items which are either a :class:`Class` or a
:class:`Function`.
Additionally, matches on names in the 'extra_keyword_matches' set of
any item, as well as names directly assigned to test functions.
"""
mapping = KeywordMapping.from_item(colitem)
if " " not in keywordexpr:
# special case to allow for simple "-k pass" and "-k 1.3"
return mapping[keywordexpr]
elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
return not mapping[keywordexpr[4:]]
for kwd in keywordexpr.split():
if keyword.iskeyword(kwd) and kwd not in python_keywords_allowed_list:
raise UsageError("Python keyword '{}' not accepted in expressions passed to '-k'".format(kwd))
try:
return eval(keywordexpr, {}, mapping)
except SyntaxError:
raise UsageError("Wrong expression passed to '-k': {}".format(keywordexpr))

_pytest/mark/structures.py

@ -1,12 +1,17 @@
""" generic mechanism for marking and selecting python functions. """
from __future__ import absolute_import, division, print_function
import inspect
import warnings
from collections import namedtuple
from operator import attrgetter
from .compat import imap
from .deprecated import MARK_INFO_ATTRIBUTE, MARK_PARAMETERSET_UNPACKING
import attr
from ..deprecated import MARK_PARAMETERSET_UNPACKING, MARK_INFO_ATTRIBUTE
from ..compat import NOTSET, getfslineno, MappingMixin
from six.moves import map, reduce
EMPTY_PARAMETERSET_OPTION = "empty_parameter_set_mark"
def alias(name, warning=None):
getter = attrgetter(name)
@ -18,6 +23,25 @@ def alias(name, warning=None):
return property(getter if warning is None else warned, doc='alias for ' + name)
def istestfunc(func):
return hasattr(func, "__call__") and \
getattr(func, "__name__", "<lambda>") != "<lambda>"
def get_empty_parameterset_mark(config, argnames, func):
requested_mark = config.getini(EMPTY_PARAMETERSET_OPTION)
if requested_mark in ('', None, 'skip'):
mark = MARK_GEN.skip
elif requested_mark == 'xfail':
mark = MARK_GEN.xfail(run=False)
else:
raise LookupError(requested_mark)
fs, lineno = getfslineno(func)
reason = "got empty parameter set %r, function %s at %s:%d" % (
argnames, func.__name__, fs, lineno)
return mark(reason=reason)
class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
@classmethod
def param(cls, *values, **kw):
@ -30,8 +54,8 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
def param_extract_id(id=None):
return id
id = param_extract_id(**kw)
return cls(values, marks, id)
id_ = param_extract_id(**kw)
return cls(values, marks, id_)
@classmethod
def extract_from(cls, parameterset, legacy_force_tuple=False):
@ -66,221 +90,53 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
return cls(argval, marks=newmarks, id=None)
@property
def deprecated_arg_dict(self):
return dict((mark.name, mark) for mark in self.marks)
class MarkerError(Exception):
"""Error in use of a pytest marker/attribute."""
def param(*values, **kw):
return ParameterSet.param(*values, **kw)
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption(
'-k',
action="store", dest="keyword", default='', metavar="EXPRESSION",
help="only run tests which match the given substring expression. "
"An expression is a python evaluatable expression "
"where all names are substring-matched against test names "
"and their parent classes. Example: -k 'test_method or test_"
"other' matches all test functions and classes whose name "
"contains 'test_method' or 'test_other'. "
"Additionally keywords are matched to classes and functions "
"containing extra names in their 'extra_keyword_matches' set, "
"as well as functions which have names assigned directly to them."
)
group._addoption(
"-m",
action="store", dest="markexpr", default="", metavar="MARKEXPR",
help="only run tests matching given mark expression. "
"example: -m 'mark1 and not mark2'."
)
group.addoption(
"--markers", action="store_true",
help="show markers (builtin, plugin and per-project ones)."
)
parser.addini("markers", "markers for test functions", 'linelist')
def pytest_cmdline_main(config):
import _pytest.config
if config.option.markers:
config._do_configure()
tw = _pytest.config.create_terminal_writer(config)
for line in config.getini("markers"):
name, rest = line.split(":", 1)
tw.write("@pytest.mark.%s:" % name, bold=True)
tw.line(rest)
tw.line()
config._ensure_unconfigure()
return 0
pytest_cmdline_main.tryfirst = True
def pytest_collection_modifyitems(items, config):
keywordexpr = config.option.keyword.lstrip()
matchexpr = config.option.markexpr
if not keywordexpr and not matchexpr:
return
# pytest used to allow "-" for negating
# but today we just allow "-" at the beginning, use "not" instead
# we probably remove "-" altogether soon
if keywordexpr.startswith("-"):
keywordexpr = "not " + keywordexpr[1:]
selectuntil = False
if keywordexpr[-1:] == ":":
selectuntil = True
keywordexpr = keywordexpr[:-1]
remaining = []
deselected = []
for colitem in items:
if keywordexpr and not matchkeyword(colitem, keywordexpr):
deselected.append(colitem)
@classmethod
def _for_parametrize(cls, argnames, argvalues, func, config):
if not isinstance(argnames, (tuple, list)):
argnames = [x.strip() for x in argnames.split(",") if x.strip()]
force_tuple = len(argnames) == 1
else:
if selectuntil:
keywordexpr = None
if matchexpr:
if not matchmark(colitem, matchexpr):
deselected.append(colitem)
continue
remaining.append(colitem)
force_tuple = False
parameters = [
ParameterSet.extract_from(x, legacy_force_tuple=force_tuple)
for x in argvalues]
del argvalues
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
if not parameters:
mark = get_empty_parameterset_mark(config, argnames, func)
parameters.append(ParameterSet(
values=(NOTSET,) * len(argnames),
marks=[mark],
id=None,
))
return argnames, parameters
class MarkMapping:
"""Provides a local mapping for markers where item access
resolves to True if the marker is present. """
def __init__(self, keywords):
mymarks = set()
for key, value in keywords.items():
if isinstance(value, MarkInfo) or isinstance(value, MarkDecorator):
mymarks.add(key)
self._mymarks = mymarks
@attr.s(frozen=True)
class Mark(object):
#: name of the mark
name = attr.ib(type=str)
#: positional arguments of the mark decorator
args = attr.ib(type="List[object]")
#: keyword arguments of the mark decorator
kwargs = attr.ib(type="Dict[str, object]")
def __getitem__(self, name):
return name in self._mymarks
def combined_with(self, other):
"""
:param other: the mark to combine with
:type other: Mark
:rtype: Mark
combines by appending args and merging the mappings
"""
assert self.name == other.name
return Mark(
self.name, self.args + other.args,
dict(self.kwargs, **other.kwargs))
class KeywordMapping:
"""Provides a local mapping for keywords.
Given a list of names, map any substring of one of these names to True.
"""
def __init__(self, names):
self._names = names
def __getitem__(self, subname):
for name in self._names:
if subname in name:
return True
return False
def matchmark(colitem, markexpr):
"""Tries to match on any marker names, attached to the given colitem."""
return eval(markexpr, {}, MarkMapping(colitem.keywords))
def matchkeyword(colitem, keywordexpr):
"""Tries to match given keyword expression to given collector item.
Will match on the name of colitem, including the names of its parents.
Only matches names of items which are either a :class:`Class` or a
:class:`Function`.
Additionally, matches on names in the 'extra_keyword_matches' set of
any item, as well as names directly assigned to test functions.
"""
mapped_names = set()
# Add the names of the current item and any parent items
import pytest
for item in colitem.listchain():
if not isinstance(item, pytest.Instance):
mapped_names.add(item.name)
# Add the names added as extra keywords to current or parent items
for name in colitem.listextrakeywords():
mapped_names.add(name)
# Add the names attached to the current function through direct assignment
if hasattr(colitem, 'function'):
for name in colitem.function.__dict__:
mapped_names.add(name)
mapping = KeywordMapping(mapped_names)
if " " not in keywordexpr:
# special case to allow for simple "-k pass" and "-k 1.3"
return mapping[keywordexpr]
elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
return not mapping[keywordexpr[4:]]
return eval(keywordexpr, {}, mapping)
def pytest_configure(config):
config._old_mark_config = MARK_GEN._config
if config.option.strict:
MARK_GEN._config = config
def pytest_unconfigure(config):
MARK_GEN._config = getattr(config, '_old_mark_config', None)
class MarkGenerator:
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``pytest.mark`` singleton instance. Example::
import pytest
@pytest.mark.slowtest
def test_function():
pass
will set a 'slowtest' :class:`MarkInfo` object
on the ``test_function`` object. """
_config = None
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError("Marker name must NOT start with underscore")
if self._config is not None:
self._check(name)
return MarkDecorator(Mark(name, (), {}))
def _check(self, name):
try:
if name in self._markers:
return
except AttributeError:
pass
self._markers = l = set()
for line in self._config.getini("markers"):
beginning = line.split(":", 1)
x = beginning[0].split("(", 1)[0]
l.add(x)
if name not in self._markers:
raise AttributeError("%r not a registered marker" % (name,))
def istestfunc(func):
return hasattr(func, "__call__") and \
getattr(func, "__name__", "<lambda>") != "<lambda>"
class MarkDecorator:
@attr.s
class MarkDecorator(object):
""" A decorator for test functions and test classes. When applied
it will create :class:`MarkInfo` objects which may be
:ref:`retrieved by hooks as item keywords <excontrolskip>`.
@ -313,9 +169,8 @@ class MarkDecorator:
additional keyword or positional arguments.
"""
def __init__(self, mark):
assert isinstance(mark, Mark), repr(mark)
self.mark = mark
mark = attr.ib(validator=attr.validators.instance_of(Mark))
name = alias('mark.name')
args = alias('mark.args')
@ -323,14 +178,25 @@ class MarkDecorator:
@property
def markname(self):
return self.name  # for backward-compat (2.4.1 had this attr)
def __eq__(self, other):
return self.mark == other.mark
return self.mark == other.mark if isinstance(other, MarkDecorator) else False
def __repr__(self):
return "<MarkDecorator %r>" % (self.mark,)
def with_args(self, *args, **kwargs):
""" return a MarkDecorator with extra arguments added
unlike call this can be used even if the sole argument is a callable/class
:return: MarkDecorator
"""
mark = Mark(self.name, args, kwargs)
return self.__class__(self.mark.combined_with(mark))
def __call__(self, *args, **kwargs):
""" if passed a single callable argument: decorate it with mark info.
otherwise add *args/**kwargs in-place to mark information. """
@ -344,9 +210,8 @@ class MarkDecorator:
store_legacy_markinfo(func, self.mark)
store_mark(func, self.mark)
return func
return self.with_args(*args, **kwargs)
mark = Mark(self.name, args, kwargs)
return self.__class__(self.mark.combined_with(mark))
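# Hedged sketch (not part of this diff): with_args keeps the mark name and
# merges extra arguments; the "timeout" kwarg is illustrative and not
# interpreted by pytest itself.
import pytest

slow = pytest.mark.slow
slow_60s = slow.with_args(timeout=60)

@slow_60s
def test_big_query():
    pass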
def get_unpacked_marks(obj):
"""
@ -368,7 +233,7 @@ def store_mark(obj, mark):
"""
assert isinstance(mark, Mark), mark
# always reassign name to avoid updating pytestmark
# in a referene that was only borrowed
# in a reference that was only borrowed
obj.pytestmark = get_unpacked_marks(obj) + [mark]
@ -379,60 +244,12 @@ def store_legacy_markinfo(func, mark):
raise TypeError("got {mark!r} instead of a Mark".format(mark=mark))
holder = getattr(func, mark.name, None)
if holder is None:
holder = MarkInfo(mark)
holder = MarkInfo.for_mark(mark)
setattr(func, mark.name, holder)
else:
holder.add_mark(mark)
class Mark(namedtuple('Mark', 'name, args, kwargs')):
def combined_with(self, other):
assert self.name == other.name
return Mark(
self.name, self.args + other.args,
dict(self.kwargs, **other.kwargs))
class MarkInfo(object):
""" Marking object created by :class:`MarkDecorator` instances. """
def __init__(self, mark):
assert isinstance(mark, Mark), repr(mark)
self.combined = mark
self._marks = [mark]
name = alias('combined.name', warning=MARK_INFO_ATTRIBUTE)
args = alias('combined.args', warning=MARK_INFO_ATTRIBUTE)
kwargs = alias('combined.kwargs', warning=MARK_INFO_ATTRIBUTE)
def __repr__(self):
return "<MarkInfo {0!r}>".format(self.combined)
def add_mark(self, mark):
""" add a MarkInfo with the given args and kwargs. """
self._marks.append(mark)
self.combined = self.combined.combined_with(mark)
def __iter__(self):
""" yield MarkInfo objects each relating to a marking-call. """
return imap(MarkInfo, self._marks)
MARK_GEN = MarkGenerator()
def _marked(func, mark):
""" Returns True if :func: is already marked with :mark:, False otherwise.
This can happen if marker is applied to class and the test file is
invoked more than once.
"""
try:
func_mark = getattr(func, mark.name)
except AttributeError:
return False
return mark.args == func_mark.args and mark.kwargs == func_mark.kwargs
def transfer_markers(funcobj, cls, mod):
"""
this function transfers class level markers and module level markers
@ -446,3 +263,152 @@ def transfer_markers(funcobj, cls, mod):
for mark in get_unpacked_marks(obj):
if not _marked(funcobj, mark):
store_legacy_markinfo(funcobj, mark)
def _marked(func, mark):
""" Returns True if :func: is already marked with :mark:, False otherwise.
This can happen if marker is applied to class and the test file is
invoked more than once.
"""
try:
func_mark = getattr(func, getattr(mark, 'combined', mark).name)
except AttributeError:
return False
return any(mark == info.combined for info in func_mark)
@attr.s
class MarkInfo(object):
""" Marking object created by :class:`MarkDecorator` instances. """
_marks = attr.ib()
combined = attr.ib(
repr=False,
default=attr.Factory(lambda self: reduce(Mark.combined_with, self._marks),
takes_self=True))
name = alias('combined.name', warning=MARK_INFO_ATTRIBUTE)
args = alias('combined.args', warning=MARK_INFO_ATTRIBUTE)
kwargs = alias('combined.kwargs', warning=MARK_INFO_ATTRIBUTE)
@classmethod
def for_mark(cls, mark):
return cls([mark])
def __repr__(self):
return "<MarkInfo {0!r}>".format(self.combined)
def add_mark(self, mark):
""" add a MarkInfo with the given args and kwargs. """
self._marks.append(mark)
self.combined = self.combined.combined_with(mark)
def __iter__(self):
""" yield MarkInfo objects each relating to a marking-call. """
return map(MarkInfo.for_mark, self._marks)
class MarkGenerator(object):
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``pytest.mark`` singleton instance. Example::
import pytest
@pytest.mark.slowtest
def test_function():
pass
will set a 'slowtest' :class:`MarkInfo` object
on the ``test_function`` object. """
_config = None
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError("Marker name must NOT start with underscore")
if self._config is not None:
self._check(name)
return MarkDecorator(Mark(name, (), {}))
def _check(self, name):
try:
if name in self._markers:
return
except AttributeError:
pass
self._markers = values = set()
for line in self._config.getini("markers"):
marker = line.split(":", 1)[0]
marker = marker.rstrip()
x = marker.split("(", 1)[0]
values.add(x)
if name not in self._markers:
raise AttributeError("%r not a registered marker" % (name,))
MARK_GEN = MarkGenerator()
class NodeKeywords(MappingMixin):
def __init__(self, node):
self.node = node
self.parent = node.parent
self._markers = {node.name: True}
def __getitem__(self, key):
try:
return self._markers[key]
except KeyError:
if self.parent is None:
raise
return self.parent.keywords[key]
def __setitem__(self, key, value):
self._markers[key] = value
def __delitem__(self, key):
raise ValueError("cannot delete key in keywords dict")
def __iter__(self):
seen = self._seen()
return iter(seen)
def _seen(self):
seen = set(self._markers)
if self.parent is not None:
seen.update(self.parent.keywords)
return seen
def __len__(self):
return len(self._seen())
def __repr__(self):
return "<NodeKeywords for node %s>" % (self.node, )
@attr.s(cmp=False, hash=False)
class NodeMarkers(object):
"""
internal structure for storing marks belonging to a node
.. warning::
unstable api
"""
own_markers = attr.ib(default=attr.Factory(list))
def update(self, add_markers):
"""update the own markers
"""
self.own_markers.extend(add_markers)
def find(self, name):
"""
find markers in own nodes or parent nodes
needs a better place
"""
for mark in self.own_markers:
if mark.name == name:
yield mark
def __iter__(self):
return iter(self.own_markers)

_pytest/monkeypatch.py

@ -4,8 +4,9 @@ from __future__ import absolute_import, division, print_function
import os
import sys
import re
from contextlib import contextmanager
from py.builtin import _basestring
import six
from _pytest.fixtures import fixture
RE_IMPORT_ERROR_NAME = re.compile("^No module named (.*)$")
@ -71,15 +72,15 @@ def annotated_getattr(obj, name, ann):
obj = getattr(obj, name)
except AttributeError:
raise AttributeError(
'%r object at %s has no attribute %r' % (
type(obj).__name__, ann, name
)
'%r object at %s has no attribute %r' % (
type(obj).__name__, ann, name
)
)
return obj
def derive_importpath(import_path, raising):
if not isinstance(import_path, _basestring) or "." not in import_path:
if not isinstance(import_path, six.string_types) or "." not in import_path:
raise TypeError("must be absolute import path string, not %r" %
(import_path,))
module, attr = import_path.rsplit('.', 1)
@ -89,7 +90,7 @@ def derive_importpath(import_path, raising):
return attr, target
class Notset:
class Notset(object):
def __repr__(self):
return "<notset>"
@ -97,7 +98,7 @@ class Notset:
notset = Notset()
class MonkeyPatch:
class MonkeyPatch(object):
""" Object returned by the ``monkeypatch`` fixture keeping a record of setattr/item/env/syspath changes.
"""
@ -107,6 +108,29 @@ class MonkeyPatch:
self._cwd = None
self._savesyspath = None
@contextmanager
def context(self):
"""
Context manager that returns a new :class:`MonkeyPatch` object which
undoes any patching done inside the ``with`` block upon exit:
.. code-block:: python
import functools
def test_partial(monkeypatch):
with monkeypatch.context() as m:
m.setattr(functools, "partial", 3)
Useful in situations where it is desired to undo some patches before the test ends,
such as mocking ``stdlib`` functions that might break pytest itself if mocked (for examples
of this see `#3290 <https://github.com/pytest-dev/pytest/issues/3290>`_).
"""
m = MonkeyPatch()
try:
yield m
finally:
m.undo()
def setattr(self, target, name, value=notset, raising=True):
""" Set attribute value on target, memorizing the old value.
By default raise AttributeError if the attribute did not exist.
@ -114,7 +138,7 @@ class MonkeyPatch:
For convenience you can specify a string as ``target`` which
will be interpreted as a dotted import path, with the last part
being the attribute name. Example:
``monkeypatch.setattr("os.getcwd", lambda x: "/")``
``monkeypatch.setattr("os.getcwd", lambda: "/")``
would set the ``getcwd`` function of the ``os`` module.
The ``raising`` value determines if the setattr should fail
@ -125,7 +149,7 @@ class MonkeyPatch:
import inspect
if value is notset:
if not isinstance(target, _basestring):
if not isinstance(target, six.string_types):
raise TypeError("use setattr(target, name, value) or "
"setattr(target, value) with target being a dotted "
"import string")
@ -155,7 +179,7 @@ class MonkeyPatch:
"""
__tracebackhide__ = True
if name is notset:
if not isinstance(target, _basestring):
if not isinstance(target, six.string_types):
raise TypeError("use delattr(target, name) or "
"delattr(target) with target being a dotted "
"import string")

_pytest/nodes.py Normal file

@ -0,0 +1,392 @@
from __future__ import absolute_import, division, print_function
import os
import six
import py
import attr
import _pytest
import _pytest._code
from _pytest.mark.structures import NodeKeywords, MarkInfo
SEP = "/"
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
def _splitnode(nodeid):
"""Split a nodeid into constituent 'parts'.
Node IDs are strings, and can be things like:
''
'testing/code'
'testing/code/test_excinfo.py'
'testing/code/test_excinfo.py::TestFormattedExcinfo::()'
Return values are lists e.g.
[]
['testing', 'code']
['testing', 'code', 'test_excinfo.py']
['testing', 'code', 'test_excinfo.py', 'TestFormattedExcinfo', '()']
"""
if nodeid == '':
# If there is no root node at all, return an empty list so the caller's logic can remain sane
return []
parts = nodeid.split(SEP)
# Replace single last element 'test_foo.py::Bar::()' with multiple elements 'test_foo.py', 'Bar', '()'
parts[-1:] = parts[-1].split("::")
return parts
def ischildnode(baseid, nodeid):
"""Return True if the nodeid is a child node of the baseid.
E.g. 'foo/bar::Baz::()' is a child of 'foo', 'foo/bar' and 'foo/bar::Baz', but not of 'foo/blorp'
"""
base_parts = _splitnode(baseid)
node_parts = _splitnode(nodeid)
if len(node_parts) < len(base_parts):
return False
return node_parts[:len(base_parts)] == base_parts
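# Hedged sketch (not part of this diff) exercising the two internal helpers
# documented above; both are private to _pytest.nodes, not public API.
from _pytest.nodes import _splitnode, ischildnode

assert _splitnode("testing/code/test_excinfo.py::TestFormattedExcinfo::()") == [
    "testing", "code", "test_excinfo.py", "TestFormattedExcinfo", "()"]
assert ischildnode("foo/bar", "foo/bar::Baz::()")
assert not ischildnode("foo/blorp", "foo/bar::Baz::()")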
@attr.s
class _CompatProperty(object):
name = attr.ib()
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(__import__('pytest'), self.name)
class Node(object):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
def __init__(self, name, parent=None, config=None, session=None, fspath=None, nodeid=None):
#: a unique name within the scope of the parent node
self.name = name
#: the parent collector node.
self.parent = parent
#: the pytest config object
self.config = config or parent.config
#: the session this node is part of
self.session = session or parent.session
#: filesystem path where this node was collected from (can be None)
self.fspath = fspath or getattr(parent, 'fspath', None)
#: keywords/markers collected from all scopes
self.keywords = NodeKeywords(self)
#: the marker objects belonging to this node
self.own_markers = []
#: allow adding of extra keywords to use for matching
self.extra_keyword_matches = set()
# used for storing artificial fixturedefs for direct parametrization
self._name2pseudofixturedef = {}
if nodeid is not None:
self._nodeid = nodeid
else:
assert parent is not None
self._nodeid = self.parent.nodeid + "::" + self.name
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(__import__('pytest'), name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
return "<%s %r>" % (self.__class__.__name__,
getattr(self, 'name', None))
def warn(self, code, message):
""" generate a warning with the given code and message for this
item. """
assert isinstance(code, str)
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
# methods for ordering nodes
@property
def nodeid(self):
""" a ::-separated string denoting its collection tree address. """
return self._nodeid
def __hash__(self):
return hash(self.nodeid)
def setup(self):
pass
def teardown(self):
pass
def listchain(self):
""" return list of all parent collectors up to self,
starting from root of collection tree. """
chain = []
item = self
while item is not None:
chain.append(item)
item = item.parent
chain.reverse()
return chain
def add_marker(self, marker):
""" dynamically add a marker object to the node.
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator, MARK_GEN
if isinstance(marker, six.string_types):
marker = getattr(MARK_GEN, marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
self.own_markers.append(marker)
def iter_markers(self):
"""
iterate over all markers of the node
"""
return (x[1] for x in self.iter_markers_with_node())
def iter_markers_with_node(self):
"""
iterate over all markers of the node
returns sequence of tuples (node, mark)
"""
for node in reversed(self.listchain()):
for mark in node.own_markers:
yield node, mark
def get_marker(self, name):
""" get a marker object from this node or None if
the node doesn't have a marker with that name.
.. warning::
deprecated
"""
markers = [x for x in self.iter_markers() if x.name == name]
if markers:
return MarkInfo(markers)
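# Hedged sketch (not part of this diff): a conftest.py hook adding a marker
# dynamically and reading marks back via the new iter_markers() API; the
# "slow" naming convention is illustrative.
def pytest_collection_modifyitems(items):
    for item in items:
        if "slow" in item.nodeid:
            item.add_marker("slow")
    slow_items = [i for i in items
                  if any(m.name == "slow" for m in i.iter_markers())]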
def listextrakeywords(self):
""" Return a set of all extra keywords in self and any parents."""
extra_keywords = set()
for item in self.listchain():
extra_keywords.update(item.extra_keyword_matches)
return extra_keywords
def listnames(self):
return [x.name for x in self.listchain()]
def addfinalizer(self, fin):
""" register a function to be called when this node is finalized.
This method can only be called when this node is active
in a setup chain, for example during self.setup().
"""
self.session._setupstate.addfinalizer(fin, self)
def getparent(self, cls):
""" get the next parent node (including ourself)
which is an instance of the given class"""
current = self
while current and not isinstance(current, cls):
current = current.parent
return current
def _prunetraceback(self, excinfo):
pass
def _repr_failure_py(self, excinfo, style=None):
fm = self.session._fixturemanager
if excinfo.errisinstance(fm.FixtureLookupError):
return excinfo.value.formatrepr()
tbfilter = True
if self.config.option.fulltrace:
style = "long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
# XXX should excinfo.getrepr record all data and toterminal() process it?
if style is None:
if self.config.option.tbstyle == "short":
style = "short"
else:
style = "long"
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
repr_failure = _repr_failure_py
class Collector(Node):
""" Collector instances create children through collect()
and thus iteratively build a tree.
"""
class CollectError(Exception):
""" an error during collection, contains a custom message. """
def collect(self):
""" returns a list of children (items and collectors)
for this collection node.
"""
raise NotImplementedError("abstract")
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
exc = excinfo.value
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
ntraceback = traceback.cut(path=self.fspath)
if ntraceback == traceback:
ntraceback = ntraceback.cut(excludepath=tracebackcutdir)
excinfo.traceback = ntraceback.filter()
def _check_initialpaths_for_relpath(session, fspath):
for initial_path in session._initialpaths:
if fspath.common(initial_path) == initial_path:
return fspath.relto(initial_path.dirname)
class FSCollector(Collector):
def __init__(self, fspath, parent=None, config=None, session=None, nodeid=None):
fspath = py.path.local(fspath) # xxx only for test_resultlog.py?
name = fspath.basename
if parent is not None:
rel = fspath.relto(parent.fspath)
if rel:
name = rel
name = name.replace(os.sep, SEP)
self.fspath = fspath
session = session or parent.session
if nodeid is None:
nodeid = self.fspath.relto(session.config.rootdir)
if not nodeid:
nodeid = _check_initialpaths_for_relpath(session, fspath)
if os.sep != SEP:
nodeid = nodeid.replace(os.sep, SEP)
super(FSCollector, self).__init__(name, parent, config, session, nodeid=nodeid, fspath=fspath)
class File(FSCollector):
""" base class for collecting tests from a file. """
class Item(Node):
""" a basic test invocation item. Note that for a single function
there might be multiple test invocation items.
"""
nextitem = None
def __init__(self, name, parent=None, config=None, session=None, nodeid=None):
super(Item, self).__init__(name, parent, config, session, nodeid=nodeid)
self._report_sections = []
#: user properties is a list of tuples (name, value) that holds user
#: defined properties for this test.
self.user_properties = []
def add_report_section(self, when, key, content):
"""
Adds a new report section, similar to what's done internally to add stdout and
stderr captured output::
item.add_report_section("call", "stdout", "report section contents")
:param str when:
One of the possible capture states, ``"setup"``, ``"call"``, ``"teardown"``.
:param str key:
Name of the section, can be customized at will. Pytest uses ``"stdout"`` and
``"stderr"`` internally.
:param str content:
The full contents as a string.
"""
if content:
self._report_sections.append((when, key, content))
def reportinfo(self):
return self.fspath, None, ""
@property
def location(self):
try:
return self._location
except AttributeError:
location = self.reportinfo()
# bestrelpath is a quite slow function
cache = self.config.__dict__.setdefault("_bestrelpathcache", {})
try:
fspath = cache[location[0]]
except KeyError:
fspath = self.session.fspath.bestrelpath(location[0])
cache[location[0]] = fspath
location = (fspath, location[1], str(location[2]))
self._location = location
return location
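# Hedged sketch (not part of this diff): a hook appending a (name, value)
# pair to item.user_properties; reporting plugins such as junitxml can pick
# these up. The "component"/"billing" pair is illustrative.
def pytest_runtest_setup(item):
    item.user_properties.append(("component", "billing"))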

_pytest/nose.py

@ -3,7 +3,6 @@ from __future__ import absolute_import, division, print_function
import sys
import py
from _pytest import unittest, runner, python
from _pytest.config import hookimpl
@ -38,14 +37,15 @@ def pytest_runtest_setup(item):
if not call_optional(item.obj, 'setup'):
# call module level setup if there is no object level one
call_optional(item.parent.obj, 'setup')
#XXX this implies we only call teardown when setup worked
# XXX this implies we only call teardown when setup worked
item.session._setupstate.addfinalizer((lambda: teardown_nose(item)), item)
def teardown_nose(item):
if is_potential_nosetest(item):
if not call_optional(item.obj, 'teardown'):
call_optional(item.parent.obj, 'teardown')
#if hasattr(item.parent, '_nosegensetup'):
# if hasattr(item.parent, '_nosegensetup'):
# #call_optional(item._nosegensetup, 'teardown')
# del item.parent._nosegensetup
@ -65,7 +65,7 @@ def is_potential_nosetest(item):
def call_optional(obj, name):
method = getattr(obj, name, None)
isfixture = hasattr(method, "_pytestfixturefunction")
if method is not None and not isfixture and py.builtin.callable(method):
if method is not None and not isfixture and callable(method):
# If there's any problems allow the exception to raise rather than
# silently ignoring them
method()

_pytest/outcomes.py Normal file

@ -0,0 +1,147 @@
"""
exception classes and constants handling test outcomes
as well as functions creating them
"""
from __future__ import absolute_import, division, print_function
import py
import sys
class OutcomeException(BaseException):
""" OutcomeException and its subclass instances indicate and
contain info about test and collection outcomes.
"""
def __init__(self, msg=None, pytrace=True):
BaseException.__init__(self, msg)
self.msg = msg
self.pytrace = pytrace
def __repr__(self):
if self.msg:
val = self.msg
if isinstance(val, bytes):
val = py._builtin._totext(val, errors='replace')
return val
return "<%s instance>" % (self.__class__.__name__,)
__str__ = __repr__
TEST_OUTCOME = (OutcomeException, Exception)
class Skipped(OutcomeException):
# XXX hackish: on 3k we fake to live in the builtins
# in order to have Skipped exception printing shorter/nicer
__module__ = 'builtins'
def __init__(self, msg=None, pytrace=True, allow_module_level=False):
OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
self.allow_module_level = allow_module_level
class Failed(OutcomeException):
""" raised from an explicit call to pytest.fail() """
__module__ = 'builtins'
class Exit(KeyboardInterrupt):
""" raised for immediate program exits (no tracebacks/summaries)"""
def __init__(self, msg="unknown reason"):
self.msg = msg
KeyboardInterrupt.__init__(self, msg)
# exposed helper methods
def exit(msg):
""" exit testing process as if KeyboardInterrupt was triggered. """
__tracebackhide__ = True
raise Exit(msg)
exit.Exception = Exit
def skip(msg="", **kwargs):
""" skip an executing test with the given message. Note: it's usually
better to use the pytest.mark.skipif marker to declare a test to be
skipped under certain conditions like mismatching platforms or
dependencies. See the pytest_skipping plugin for details.
:kwarg bool allow_module_level: allows this function to be called at
module level, skipping the rest of the module. Default to False.
"""
__tracebackhide__ = True
allow_module_level = kwargs.pop('allow_module_level', False)
if kwargs:
keys = [k for k in kwargs.keys()]
raise TypeError('unexpected keyword arguments: {0}'.format(keys))
raise Skipped(msg=msg, allow_module_level=allow_module_level)
skip.Exception = Skipped
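# Hedged sketch (not part of this diff): skipping a whole module at import
# time via allow_module_level; the platform check is illustrative.
import sys
import pytest

if not sys.platform.startswith("linux"):
    pytest.skip("these tests only run on Linux", allow_module_level=True)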
def fail(msg="", pytrace=True):
""" explicitly fail an currently-executing test with the given Message.
:arg pytrace: if false the msg represents the full failure information
and no python traceback will be reported.
"""
__tracebackhide__ = True
raise Failed(msg=msg, pytrace=pytrace)
fail.Exception = Failed
class XFailed(fail.Exception):
""" raised from an explicit call to pytest.xfail() """
def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
__tracebackhide__ = True
raise XFailed(reason)
xfail.Exception = XFailed
def importorskip(modname, minversion=None):
""" return imported module if it has at least "minversion" as its
__version__ attribute. If no minversion is specified, a skip
is only triggered if the module cannot be imported.
"""
import warnings
__tracebackhide__ = True
compile(modname, '', 'eval') # to catch syntaxerrors
should_skip = False
with warnings.catch_warnings():
# make sure to ignore ImportWarnings that might happen because
# of existing directories with the same name we're trying to
# import but without a __init__.py file
warnings.simplefilter('ignore')
try:
__import__(modname)
except ImportError:
# Do not raise chained exception here (#1485)
should_skip = True
if should_skip:
raise Skipped("could not import %r" % (modname,), allow_module_level=True)
mod = sys.modules[modname]
if minversion is None:
return mod
verattr = getattr(mod, '__version__', None)
if minversion is not None:
try:
from pkg_resources import parse_version as pv
except ImportError:
raise Skipped("we have a required version for %r but can not import "
"pkg_resources to parse version strings." % (modname,),
allow_module_level=True)
if verattr is None or pv(verattr) < pv(minversion):
raise Skipped("module %r has __version__ %r, required is: %r" % (
modname, verattr, minversion), allow_module_level=True)
return mod
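# Hedged sketch (not part of this diff): typical importorskip usage; the
# module name and minimum version are illustrative.
import pytest

numpy = pytest.importorskip("numpy", minversion="1.10")

def test_zeros():
    assert numpy.zeros(3).sum() == 0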

_pytest/pastebin.py

@ -2,6 +2,7 @@
from __future__ import absolute_import, division, print_function
import pytest
import six
import sys
import tempfile
@ -9,14 +10,13 @@ import tempfile
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting")
group._addoption('--pastebin', metavar="mode",
action='store', dest="pastebin", default=None,
choices=['failed', 'all'],
help="send failed|all info to bpaste.net pastebin service.")
action='store', dest="pastebin", default=None,
choices=['failed', 'all'],
help="send failed|all info to bpaste.net pastebin service.")
@pytest.hookimpl(trylast=True)
def pytest_configure(config):
import py
if config.option.pastebin == "all":
tr = config.pluginmanager.getplugin('terminalreporter')
# if no terminal reporter plugin is present, nothing we can do here;
@ -29,7 +29,7 @@ def pytest_configure(config):
def tee_write(s, **kwargs):
oldwrite(s, **kwargs)
if py.builtin._istext(s):
if isinstance(s, six.text_type):
s = s.encode('utf-8')
config._pastebinfile.write(s)
@ -97,4 +97,4 @@ def pytest_terminal_summary(terminalreporter):
s = tw.stringio.getvalue()
assert len(s)
pastebinurl = create_new_paste(s)
tr.write_line("%s --> %s" %(msg, pastebinurl))
tr.write_line("%s --> %s" % (msg, pastebinurl))

File diff suppressed because it is too large

_pytest/python.py

@ -6,28 +6,41 @@ import inspect
import sys
import os
import collections
import warnings
from textwrap import dedent
from itertools import count
import py
import six
from _pytest.mark import MarkerError
from _pytest.config import hookimpl
import _pytest
import _pytest._pluggy as pluggy
import pluggy
from _pytest import fixtures
from _pytest import main
from _pytest import nodes
from _pytest import deprecated
from _pytest.compat import (
isclass, isfunction, is_generator, _escape_strings,
isclass, isfunction, is_generator, ascii_escaped,
REGEX_TYPE, STRING_TYPES, NoneType, NOTSET,
get_real_func, getfslineno, safe_getattr,
safe_str, getlocation, enum,
)
from _pytest.runner import fail
from _pytest.mark import transfer_markers
from _pytest.outcomes import fail
from _pytest.mark.structures import transfer_markers, get_unpacked_marks
cutdir1 = py.path.local(pluggy.__file__.rstrip("oc"))
cutdir2 = py.path.local(_pytest.__file__).dirpath()
cutdir3 = py.path.local(py.__file__).dirpath()
# relative paths that we use to filter traceback entries from appearing to the user;
# see filter_traceback
# note: if we need to add more paths than what we have now we should probably use a list
# for better maintenance
_pluggy_dir = py.path.local(pluggy.__file__.rstrip("oc"))
# pluggy is either a package or a single module depending on the version
if _pluggy_dir.basename == '__init__.py':
_pluggy_dir = _pluggy_dir.dirpath()
_pytest_dir = py.path.local(_pytest.__file__).dirpath()
_py_dir = py.path.local(py.__file__).dirpath()
def filter_traceback(entry):
@ -42,11 +55,10 @@ def filter_traceback(entry):
is_generated = '<' in raw_filename and '>' in raw_filename
if is_generated:
return False
# entry.path might point to an inexisting file, in which case it will
# alsso return a str object. see #1133
# entry.path might point to a non-existing file, in which case it will
# also return a str object. see #1133
p = py.path.local(entry.path)
return p != cutdir1 and not p.relto(cutdir2) and not p.relto(cutdir3)
return not p.relto(_pluggy_dir) and not p.relto(_pytest_dir) and not p.relto(_py_dir)
def pyobj_property(name):
@ -62,8 +74,8 @@ def pyobj_property(name):
def pytest_addoption(parser):
group = parser.getgroup("general")
group.addoption('--fixtures', '--funcargs',
action="store_true", dest="showfixtures", default=False,
help="show available fixtures, sorted by plugin appearance")
action="store_true", dest="showfixtures", default=False,
help="show available fixtures, sorted by plugin appearance")
group.addoption(
'--fixtures-per-test',
action="store_true",
@ -72,20 +84,20 @@ def pytest_addoption(parser):
help="show fixtures per test",
)
parser.addini("usefixtures", type="args", default=[],
help="list of default fixtures to be used with this project")
help="list of default fixtures to be used with this project")
parser.addini("python_files", type="args",
default=['test_*.py', '*_test.py'],
help="glob-style file patterns for Python test module discovery")
parser.addini("python_classes", type="args", default=["Test",],
help="prefixes or glob names for Python test class discovery")
parser.addini("python_functions", type="args", default=["test",],
help="prefixes or glob names for Python test function and "
"method discovery")
default=['test_*.py', '*_test.py'],
help="glob-style file patterns for Python test module discovery")
parser.addini("python_classes", type="args", default=["Test", ],
help="prefixes or glob names for Python test class discovery")
parser.addini("python_functions", type="args", default=["test", ],
help="prefixes or glob names for Python test function and "
"method discovery")
group.addoption("--import-mode", default="prepend",
choices=["prepend", "append"], dest="importmode",
help="prepend/append to sys.path when importing test modules, "
"default is to prepend.")
choices=["prepend", "append"], dest="importmode",
help="prepend/append to sys.path when importing test modules, "
"default is to prepend.")
def pytest_cmdline_main(config):
@ -105,28 +117,26 @@ def pytest_generate_tests(metafunc):
if hasattr(metafunc.function, attr):
msg = "{0} has '{1}', spelling should be 'parametrize'"
raise MarkerError(msg.format(metafunc.function.__name__, attr))
try:
markers = metafunc.function.parametrize
except AttributeError:
return
for marker in markers:
metafunc.parametrize(*marker.args, **marker.kwargs)
for marker in metafunc.definition.iter_markers():
if marker.name == 'parametrize':
metafunc.parametrize(*marker.args, **marker.kwargs)
def pytest_configure(config):
config.addinivalue_line("markers",
"parametrize(argnames, argvalues): call a test function multiple "
"times passing in different arguments in turn. argvalues generally "
"needs to be a list of values if argnames specifies only one name "
"or a list of tuples of values if argnames specifies multiple names. "
"Example: @parametrize('arg1', [1,2]) would lead to two calls of the "
"decorated test function, one with arg1=1 and another with arg1=2."
"see http://pytest.org/latest/parametrize.html for more info and "
"examples."
)
"parametrize(argnames, argvalues): call a test function multiple "
"times passing in different arguments in turn. argvalues generally "
"needs to be a list of values if argnames specifies only one name "
"or a list of tuples of values if argnames specifies multiple names. "
"Example: @parametrize('arg1', [1,2]) would lead to two calls of the "
"decorated test function, one with arg1=1 and another with arg1=2."
"see http://pytest.org/latest/parametrize.html for more info and "
"examples."
)
config.addinivalue_line("markers",
"usefixtures(fixturename1, fixturename2, ...): mark tests as needing "
"all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures "
)
"usefixtures(fixturename1, fixturename2, ...): mark tests as needing "
"all of the specified fixtures. see http://pytest.org/latest/fixture.html#usefixtures "
)
@hookimpl(trylast=True)
@ -151,13 +161,15 @@ def pytest_collect_file(path, parent):
if path.fnmatch(pat):
break
else:
return
return
ihook = parent.session.gethookproxy(path)
return ihook.pytest_pycollect_makemodule(path=path, parent=parent)
def pytest_pycollect_makemodule(path, parent):
return Module(path, parent)
@hookimpl(hookwrapper=True)
def pytest_pycollect_makeitem(collector, name, obj):
outcome = yield
@ -176,9 +188,8 @@ def pytest_pycollect_makeitem(collector, name, obj):
# or a functools.wrapped.
# We mustn't if it's been wrapped with mock.patch (python 2 only)
if not (isfunction(obj) or isfunction(get_real_func(obj))):
collector.warn(code="C2", message=
"cannot collect %r because it is not a function."
% name, )
collector.warn(code="C2", message="cannot collect %r because it is not a function."
% name, )
elif getattr(obj, "__test__", True):
if is_generator(obj):
res = Generator(name, parent=collector)
@ -186,22 +197,32 @@ def pytest_pycollect_makeitem(collector, name, obj):
res = list(collector._genfunctions(name, obj))
outcome.force_result(res)
def pytest_make_parametrize_id(config, val, argname=None):
return None
class PyobjContext(object):
module = pyobj_property("Module")
cls = pyobj_property("Class")
instance = pyobj_property("Instance")
class PyobjMixin(PyobjContext):
_ALLOW_MARKERS = True
def __init__(self, *k, **kw):
super(PyobjMixin, self).__init__(*k, **kw)
def obj():
def fget(self):
obj = getattr(self, '_obj', None)
if obj is None:
self._obj = obj = self._getobj()
# XXX evil hack
# used to avoid Instance collector marker duplication
if self._ALLOW_MARKERS:
self.own_markers.extend(get_unpacked_marks(self.obj))
return obj
def fset(self, value):
@ -253,7 +274,8 @@ class PyobjMixin(PyobjContext):
assert isinstance(lineno, int)
return fspath, lineno, modpath
class PyCollector(PyobjMixin, main.Collector):
class PyCollector(PyobjMixin, nodes.Collector):
def funcnamefilter(self, name):
return self._matches_prefix_or_glob_option('python_functions', name)
@ -271,10 +293,22 @@ class PyCollector(PyobjMixin, main.Collector):
return self._matches_prefix_or_glob_option('python_classes', name)
def istestfunction(self, obj, name):
return (
(self.funcnamefilter(name) or self.isnosetest(obj)) and
safe_getattr(obj, "__call__", False) and fixtures.getfixturemarker(obj) is None
)
if self.funcnamefilter(name) or self.isnosetest(obj):
if isinstance(obj, staticmethod):
# static methods need to be unwrapped
obj = safe_getattr(obj, '__func__', False)
if obj is False:
# Python 2.6 wraps in a different way that we won't try to handle
msg = "cannot collect static method %r because " \
"it is not a function (always the case in Python 2.6)"
self.warn(
code="C2", message=msg % name)
return False
return (
safe_getattr(obj, "__call__", False) and fixtures.getfixturemarker(obj) is None
)
else:
return False
def istestclass(self, obj, name):
return self.classnamefilter(name) or self.isnosetest(obj)
@ -305,23 +339,27 @@ class PyCollector(PyobjMixin, main.Collector):
for basecls in inspect.getmro(self.obj.__class__):
dicts.append(basecls.__dict__)
seen = {}
l = []
values = []
for dic in dicts:
for name, obj in list(dic.items()):
if name in seen:
continue
seen[name] = True
res = self.makeitem(name, obj)
res = self._makeitem(name, obj)
if res is None:
continue
if not isinstance(res, list):
res = [res]
l.extend(res)
l.sort(key=lambda item: item.reportinfo()[:2])
return l
values.extend(res)
values.sort(key=lambda item: item.reportinfo()[:2])
return values
def makeitem(self, name, obj):
#assert self.ihook.fspath == self.fspath, self
warnings.warn(deprecated.COLLECTOR_MAKEITEM, stacklevel=2)
self._makeitem(name, obj)
def _makeitem(self, name, obj):
# assert self.ihook.fspath == self.fspath, self
return self.ihook.pytest_pycollect_makeitem(
collector=self, name=name, obj=obj)
@ -331,9 +369,15 @@ class PyCollector(PyobjMixin, main.Collector):
cls = clscol and clscol.obj or None
transfer_markers(funcobj, cls, module)
fm = self.session._fixturemanager
fixtureinfo = fm.getfixtureinfo(self, funcobj, cls)
metafunc = Metafunc(funcobj, fixtureinfo, self.config,
cls=cls, module=module)
definition = FunctionDefinition(
name=name,
parent=self,
callobj=funcobj,
)
fixtureinfo = fm.getfixtureinfo(definition, funcobj, cls)
metafunc = Metafunc(definition, fixtureinfo, self.config, cls=cls, module=module)
methods = []
if hasattr(module, "pytest_generate_tests"):
methods.append(module.pytest_generate_tests)
@ -357,12 +401,12 @@ class PyCollector(PyobjMixin, main.Collector):
yield Function(name=subname, parent=self,
callspec=callspec, callobj=funcobj,
fixtureinfo=fixtureinfo,
keywords={callspec.id:True},
keywords={callspec.id: True},
originalname=name,
)
class Module(main.File, PyCollector):
class Module(nodes.File, PyCollector):
""" Collector for test classes and functions. """
def _getobj(self):
@ -390,7 +434,7 @@ class Module(main.File, PyCollector):
" %s\n"
"HINT: remove __pycache__ / .pyc files and/or use a "
"unique basename for your test file modules"
% e.args
% e.args
)
except ImportError:
from _pytest._code.code import ExceptionInfo
@ -409,9 +453,10 @@ class Module(main.File, PyCollector):
if e.allow_module_level:
raise
raise self.CollectError(
"Using pytest.skip outside of a test is not allowed. If you are "
"trying to decorate a test function, use the @pytest.mark.skip "
"or @pytest.mark.skipif decorators instead."
"Using pytest.skip outside of a test is not allowed. "
"To decorate a test function, use the @pytest.mark.skip "
"or @pytest.mark.skipif decorators instead, and to skip a "
"module use `pytestmark = pytest.mark.{skip,skipif}."
)
self.config.pluginmanager.consider_module(mod)
return mod
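# A hedged sketch of the module-level skipping that the new message above points
# to; the platform condition and module name are illustrative.
import sys

import pytest

# skip every test in this module via a module-level ``pytestmark`` assignment
pytestmark = pytest.mark.skipif(sys.platform == "win32",
                                reason="does not run on windows")

# skipping when an optional dependency is missing also works at module level
lxml = pytest.importorskip("lxml")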
@ -462,12 +507,13 @@ def _get_xunit_func(obj, name):
class Class(PyCollector):
""" Collector for test methods. """
def collect(self):
if not safe_getattr(self.obj, "__test__", True):
return []
if hasinit(self.obj):
self.warn("C1", "cannot collect test class %r because it has a "
"__init__ constructor" % self.obj.__name__)
"__init__ constructor" % self.obj.__name__)
return []
elif hasnew(self.obj):
self.warn("C1", "cannot collect test class %r because it has a "
@ -488,7 +534,13 @@ class Class(PyCollector):
fin_class = getattr(fin_class, '__func__', fin_class)
self.addfinalizer(lambda: fin_class(self.obj))
class Instance(PyCollector):
_ALLOW_MARKERS = False # hack, destroy later
# instances share the object with their parents in a way
# that duplicates marker instances if not taken out
# can be removed at node structure reorganization time
def _getobj(self):
return self.parent.obj()
@ -500,6 +552,7 @@ class Instance(PyCollector):
self.obj = self._getobj()
return self.obj
class FunctionMixin(PyobjMixin):
""" mixin for the code common to Function and Generator.
"""
@ -535,7 +588,6 @@ class FunctionMixin(PyobjMixin):
if ntraceback == traceback:
ntraceback = ntraceback.cut(path=path)
if ntraceback == traceback:
#ntraceback = ntraceback.cut(excludepath=cutdir2)
ntraceback = ntraceback.filter(filter_traceback)
if not ntraceback:
ntraceback = traceback
@ -553,7 +605,7 @@ class FunctionMixin(PyobjMixin):
if not excinfo.value.pytrace:
return py._builtin._totext(excinfo.value)
return super(FunctionMixin, self)._repr_failure_py(excinfo,
style=style)
style=style)
def repr_failure(self, excinfo, outerr=None):
assert outerr is None, "XXX outerr usage is deprecated"
@ -572,28 +624,28 @@ class Generator(FunctionMixin, PyCollector):
self.session._setupstate.prepare(self)
# see FunctionMixin.setup and test_setupstate_is_preserved_134
self._preservedparent = self.parent.obj
l = []
values = []
seen = {}
for i, x in enumerate(self.obj()):
name, call, args = self.getcallargs(x)
if not callable(call):
raise TypeError("%r yielded non callable test %r" %(self.obj, call,))
raise TypeError("%r yielded non callable test %r" % (self.obj, call,))
if name is None:
name = "[%d]" % i
else:
name = "['%s']" % name
if name in seen:
raise ValueError("%r generated tests with non-unique name %r" %(self, name))
raise ValueError("%r generated tests with non-unique name %r" % (self, name))
seen[name] = True
l.append(self.Function(name, self, args=args, callobj=call))
self.config.warn('C1', deprecated.YIELD_TESTS, fslocation=self.fspath)
return l
values.append(self.Function(name, self, args=args, callobj=call))
self.warn('C1', deprecated.YIELD_TESTS)
return values
def getcallargs(self, obj):
if not isinstance(obj, (tuple, list)):
obj = (obj,)
# explicit naming
if isinstance(obj[0], py.builtin._basestring):
if isinstance(obj[0], six.string_types):
name = obj[0]
obj = obj[1:]
else:
@ -624,14 +676,14 @@ class CallSpec2(object):
self._globalid_args = set()
self._globalparam = NOTSET
self._arg2scopenum = {} # used for sorting parametrized resources
self.keywords = {}
self.marks = []
self.indices = {}
def copy(self, metafunc):
cs = CallSpec2(self.metafunc)
cs.funcargs.update(self.funcargs)
cs.params.update(self.params)
cs.keywords.update(self.keywords)
cs.marks.extend(self.marks)
cs.indices.update(self.indices)
cs._arg2scopenum.update(self._arg2scopenum)
cs._idlist = list(self._idlist)
@ -642,7 +694,7 @@ class CallSpec2(object):
def _checkargnotcontained(self, arg):
if arg in self.params or arg in self.funcargs:
raise ValueError("duplicate %r" %(arg,))
raise ValueError("duplicate %r" % (arg,))
def getparam(self, name):
try:
@ -656,16 +708,16 @@ class CallSpec2(object):
def id(self):
return "-".join(map(str, filter(None, self._idlist)))
def setmulti(self, valtypes, argnames, valset, id, keywords, scopenum,
param_index):
for arg,val in zip(argnames, valset):
def setmulti2(self, valtypes, argnames, valset, id, marks, scopenum,
param_index):
for arg, val in zip(argnames, valset):
self._checkargnotcontained(arg)
valtype_for_arg = valtypes[arg]
getattr(self, valtype_for_arg)[arg] = val
self.indices[arg] = param_index
self._arg2scopenum[arg] = scopenum
self._idlist.append(id)
self.keywords.update(keywords)
self.marks.extend(marks)
def setall(self, funcargs, id, param):
for x in funcargs:
@ -682,20 +734,23 @@ class CallSpec2(object):
class Metafunc(fixtures.FuncargnamesCompatAttr):
"""
Metafunc objects are passed to the ``pytest_generate_tests`` hook.
Metafunc objects are passed to the :func:`pytest_generate_tests <_pytest.hookspec.pytest_generate_tests>` hook.
They help to inspect a test function and to generate tests according to
test configuration or values specified in the class or module where a
test function is defined.
"""
def __init__(self, function, fixtureinfo, config, cls=None, module=None):
def __init__(self, definition, fixtureinfo, config, cls=None, module=None):
#: access to the :class:`_pytest.config.Config` object for the test session
assert isinstance(definition, FunctionDefinition) or type(definition).__name__ == "DefinitionMock"
self.definition = definition
self.config = config
#: the module object where the test function is defined in.
self.module = module
#: underlying python test function
self.function = function
self.function = definition.obj
#: set of fixture names required by the test function
self.fixturenames = fixtureinfo.names_closure
@ -704,11 +759,11 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
self.cls = cls
self._calls = []
self._ids = py.builtin.set()
self._ids = set()
self._arg2fixturedefs = fixtureinfo.name2fixturedefs
def parametrize(self, argnames, argvalues, indirect=False, ids=None,
scope=None):
scope=None):
""" Add new invocations to the underlying test function using the list
of argvalues for the given argnames. Parametrization is performed
during the collection phase. If you need to setup expensive resources
@ -747,30 +802,13 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
to set a dynamic scope using test context or configuration.
"""
from _pytest.fixtures import scope2index
from _pytest.mark import MARK_GEN, ParameterSet
from _pytest.mark import ParameterSet
from py.io import saferepr
if not isinstance(argnames, (tuple, list)):
argnames = [x.strip() for x in argnames.split(",") if x.strip()]
force_tuple = len(argnames) == 1
else:
force_tuple = False
parameters = [
ParameterSet.extract_from(x, legacy_force_tuple=force_tuple)
for x in argvalues]
argnames, parameters = ParameterSet._for_parametrize(
argnames, argvalues, self.function, self.config)
del argvalues
if not parameters:
fs, lineno = getfslineno(self.function)
reason = "got empty parameter set %r, function %s at %s:%d" % (
argnames, self.function.__name__, fs, lineno)
mark = MARK_GEN.skip(reason=reason)
parameters.append(ParameterSet(
values=(NOTSET,) * len(argnames),
marks=[mark],
id=None,
))
if scope is None:
scope = _find_parametrized_scope(argnames, self._arg2fixturedefs, indirect)
@ -784,7 +822,7 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
name = 'fixture' if indirect else 'argument'
raise ValueError(
"%r uses no %s %r" % (
self.function, name, arg))
self.function, name, arg))
if indirect is True:
valtypes = dict.fromkeys(argnames, "params")
@ -806,7 +844,7 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
raise ValueError('%d tests specified with %d ids' % (
len(parameters), len(ids)))
for id_value in ids:
if id_value is not None and not isinstance(id_value, py.builtin._basestring):
if id_value is not None and not isinstance(id_value, six.string_types):
msg = 'ids must be list of strings, found: %s (type: %s)'
raise ValueError(msg % (saferepr(id_value), type(id_value).__name__))
ids = idmaker(argnames, parameters, idfn, ids, self.config)
@ -820,15 +858,19 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
'equal to the number of names ({1})'.format(
param.values, argnames))
newcallspec = callspec.copy(self)
newcallspec.setmulti(valtypes, argnames, param.values, a_id,
param.deprecated_arg_dict, scopenum, param_index)
newcallspec.setmulti2(valtypes, argnames, param.values, a_id,
param.marks, scopenum, param_index)
newcalls.append(newcallspec)
self._calls = newcalls
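# A hedged sketch of the hook that feeds into the parametrize() implementation
# above; the "stringinput" fixture name is illustrative only.
def pytest_generate_tests(metafunc):
    if "stringinput" in metafunc.fixturenames:
        metafunc.parametrize("stringinput", ["spam", "eggs"])


def test_is_alpha(stringinput):
    assert stringinput.isalpha()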
def addcall(self, funcargs=None, id=NOTSET, param=NOTSET):
""" (deprecated, use parametrize) Add a new call to the underlying
test function during the collection phase of a test run. Note that
request.addcall() is called during the test collection phase prior and
""" Add a new call to the underlying test function during the collection phase of a test run.
.. deprecated:: 3.3
Use :meth:`parametrize` instead.
Note that request.addcall() is called during the test collection phase prior to and
independently of actual test execution. You should only use addcall()
if you need to specify multiple arguments of a test function.
@ -841,6 +883,8 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
:arg param: a parameter which will be exposed to a later fixture function
invocation through the ``request.param`` attribute.
"""
if self.config:
self.config.warn('C1', message=deprecated.METAFUNC_ADD_CALL, fslocation=None)
assert funcargs is None or isinstance(funcargs, dict)
if funcargs is not None:
for name in funcargs:
@ -875,7 +919,7 @@ def _find_parametrized_scope(argnames, arg2fixturedefs, indirect):
from _pytest.fixtures import scopes
indirect_as_list = isinstance(indirect, (list, tuple))
all_arguments_are_fixtures = indirect is True or \
indirect_as_list and len(indirect) == argnames
indirect_as_list and len(indirect) == argnames
if all_arguments_are_fixtures:
fixturedefs = arg2fixturedefs or {}
used_scopes = [fixturedef[0].scope for name, fixturedef in fixturedefs.items()]
@ -900,7 +944,7 @@ def _idval(val, argname, idx, idfn, config=None):
msg += '\nUpdate your code as this will raise an error in pytest-4.0.'
warnings.warn(msg, DeprecationWarning)
if s:
return _escape_strings(s)
return ascii_escaped(s)
if config:
hook_id = config.hook.pytest_make_parametrize_id(
@ -909,16 +953,16 @@ def _idval(val, argname, idx, idfn, config=None):
return hook_id
if isinstance(val, STRING_TYPES):
return _escape_strings(val)
return ascii_escaped(val)
elif isinstance(val, (float, int, bool, NoneType)):
return str(val)
elif isinstance(val, REGEX_TYPE):
return _escape_strings(val.pattern)
return ascii_escaped(val.pattern)
elif enum is not None and isinstance(val, enum.Enum):
return str(val)
elif isclass(val) and hasattr(val, '__name__'):
elif (isclass(val) or isfunction(val)) and hasattr(val, '__name__'):
return val.__name__
return str(argname)+str(idx)
return str(argname) + str(idx)
def _idvalset(idx, parameterset, argnames, idfn, ids, config=None):
@ -929,7 +973,7 @@ def _idvalset(idx, parameterset, argnames, idfn, ids, config=None):
for val, argname in zip(parameterset.values, argnames)]
return "-".join(this_id)
else:
return _escape_strings(ids[idx])
return ascii_escaped(ids[idx])
def idmaker(argnames, parametersets, idfn=None, ids=None, config=None):
@ -958,52 +1002,48 @@ def _show_fixtures_per_test(config, session):
tw = _pytest.config.create_terminal_writer(config)
verbose = config.getvalue("verbose")
def get_best_rel(func):
def get_best_relpath(func):
loc = getlocation(func, curdir)
return curdir.bestrelpath(loc)
def write_fixture(fixture_def):
argname = fixture_def.argname
if verbose <= 0 and argname.startswith("_"):
return
if verbose > 0:
bestrel = get_best_rel(fixture_def.func)
bestrel = get_best_relpath(fixture_def.func)
funcargspec = "{0} -- {1}".format(argname, bestrel)
else:
funcargspec = argname
tw.line(funcargspec, green=True)
INDENT = ' {0}'
fixture_doc = fixture_def.func.__doc__
if fixture_doc:
for line in fixture_doc.strip().split('\n'):
tw.line(INDENT.format(line.strip()))
write_docstring(tw, fixture_doc)
else:
tw.line(INDENT.format('no docstring available'), red=True)
tw.line(' no docstring available', red=True)
def write_item(item):
name2fixturedefs = item._fixtureinfo.name2fixturedefs
if not name2fixturedefs:
# The given test item does not use any fixtures
try:
info = item._fixtureinfo
except AttributeError:
# doctests items have no _fixtureinfo attribute
return
if not info.name2fixturedefs:
# this test item does not use any fixtures
return
bestrel = get_best_rel(item.function)
tw.line()
tw.sep('-', 'fixtures used by {0}'.format(item.name))
tw.sep('-', '({0})'.format(bestrel))
for argname, fixture_defs in sorted(name2fixturedefs.items()):
assert fixture_defs is not None
if not fixture_defs:
tw.sep('-', '({0})'.format(get_best_relpath(item.function)))
# dict key not used in loop but needed for sorting
for _, fixturedefs in sorted(info.name2fixturedefs.items()):
assert fixturedefs is not None
if not fixturedefs:
continue
# The last fixture def item in the list is expected
# to be the one used by the test item
write_fixture(fixture_defs[-1])
# last item is expected to be the one used by the test item
write_fixture(fixturedefs[-1])
for item in session.items:
write_item(item)
for session_item in session.items:
write_item(session_item)
def showfixtures(config):
@ -1043,35 +1083,48 @@ def _showfixtures_main(config, session):
if currentmodule != module:
if not module.startswith("_pytest."):
tw.line()
tw.sep("-", "fixtures defined from %s" %(module,))
tw.sep("-", "fixtures defined from %s" % (module,))
currentmodule = module
if verbose <= 0 and argname[0] == "_":
continue
if verbose > 0:
funcargspec = "%s -- %s" %(argname, bestrel,)
funcargspec = "%s -- %s" % (argname, bestrel,)
else:
funcargspec = argname
tw.line(funcargspec, green=True)
loc = getlocation(fixturedef.func, curdir)
doc = fixturedef.func.__doc__ or ""
if doc:
for line in doc.strip().split("\n"):
tw.line(" " + line.strip())
write_docstring(tw, doc)
else:
tw.line(" %s: no docstring available" %(loc,),
red=True)
tw.line(" %s: no docstring available" % (loc,),
red=True)
def write_docstring(tw, doc):
INDENT = " "
doc = doc.rstrip()
if "\n" in doc:
firstline, rest = doc.split("\n", 1)
else:
firstline, rest = doc, ""
#
# the basic pytest Function item
#
if firstline.strip():
tw.line(INDENT + firstline.strip())
class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr):
if rest:
for line in dedent(rest).split("\n"):
tw.write(INDENT + line + "\n")
class Function(FunctionMixin, nodes.Item, fixtures.FuncargnamesCompatAttr):
""" a Function Item is responsible for setting up and executing a
Python test function.
"""
_genid = None
# disable since functions handle it themselves
_ALLOW_MARKERS = False
def __init__(self, name, parent, args=None, config=None,
callspec=None, callobj=NOTSET, keywords=None, session=None,
fixtureinfo=None, originalname=None):
@ -1082,9 +1135,17 @@ class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr):
self.obj = callobj
self.keywords.update(self.obj.__dict__)
self.own_markers.extend(get_unpacked_marks(self.obj))
if callspec:
self.callspec = callspec
self.keywords.update(callspec.keywords)
# this is totally hostile and a mess
# keywords are broken by design by now
# this will be redeemed later
for mark in callspec.marks:
# feel free to cry, this was broken for years before
# and keywords can't fix it per design
self.keywords[mark.name] = mark
self.own_markers.extend(callspec.marks)
if keywords:
self.keywords.update(keywords)
@ -1123,7 +1184,7 @@ class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr):
def _getobj(self):
name = self.name
i = name.find("[") # parametrization
i = name.find("[") # parametrization
if i != -1:
name = name[:i]
return getattr(self.parent.obj, name)
@ -1143,3 +1204,15 @@ class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr):
def setup(self):
super(Function, self).setup()
fixtures.fillfixtures(self)
class FunctionDefinition(Function):
"""
internal hack until we get actual definition nodes instead of the
crappy metafunc hack
"""
def runtest(self):
raise RuntimeError("function definitions are not supposed to be used")
setup = runtest
@ -2,14 +2,278 @@ import math
import sys
import py
from six import binary_type, text_type
from six.moves import zip, filterfalse
from more_itertools.more import always_iterable
from _pytest.compat import isclass
from _pytest.runner import fail
from _pytest.outcomes import fail
import _pytest._code
def _cmp_raises_type_error(self, other):
"""__cmp__ implementation which raises TypeError. Used
by Approx base classes to implement only == and != and raise a
TypeError for other comparisons.
Needed in Python 2 only; in Python 3 it is enough not to implement the
other comparison operators at all.
"""
__tracebackhide__ = True
raise TypeError('Comparison operators other than == and != not supported by approx objects')
# builtin pytest.approx helper
class approx(object):
class ApproxBase(object):
"""
Provide shared utilities for making approximate comparisons between numbers
or sequences of numbers.
"""
# Tell numpy to use our `__eq__` operator instead of its own.
__array_ufunc__ = None
__array_priority__ = 100
def __init__(self, expected, rel=None, abs=None, nan_ok=False):
self.expected = expected
self.abs = abs
self.rel = rel
self.nan_ok = nan_ok
def __repr__(self):
raise NotImplementedError
def __eq__(self, actual):
return all(
a == self._approx_scalar(x)
for a, x in self._yield_comparisons(actual))
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
if sys.version_info[0] == 2:
__cmp__ = _cmp_raises_type_error
def _approx_scalar(self, x):
return ApproxScalar(x, rel=self.rel, abs=self.abs, nan_ok=self.nan_ok)
def _yield_comparisons(self, actual):
"""
Yield all the pairs of numbers to be compared. This is used to
implement the `__eq__` method.
"""
raise NotImplementedError
class ApproxNumpy(ApproxBase):
"""
Perform approximate comparisons for numpy arrays.
"""
def __repr__(self):
# It might be nice to rewrite this function to account for the
# shape of the array...
import numpy as np
return "approx({0!r})".format(list(
self._approx_scalar(x) for x in np.asarray(self.expected)))
if sys.version_info[0] == 2:
__cmp__ = _cmp_raises_type_error
def __eq__(self, actual):
import numpy as np
# self.expected is supposed to always be an array here
if not np.isscalar(actual):
try:
actual = np.asarray(actual)
except: # noqa
raise TypeError("cannot compare '{0}' to numpy.ndarray".format(actual))
if not np.isscalar(actual) and actual.shape != self.expected.shape:
return False
return ApproxBase.__eq__(self, actual)
def _yield_comparisons(self, actual):
import numpy as np
# `actual` can either be a numpy array or a scalar, it is treated in
# `__eq__` before being passed to `ApproxBase.__eq__`, which is the
# only method that calls this one.
if np.isscalar(actual):
for i in np.ndindex(self.expected.shape):
yield actual, np.asscalar(self.expected[i])
else:
for i in np.ndindex(self.expected.shape):
yield np.asscalar(actual[i]), np.asscalar(self.expected[i])
class ApproxMapping(ApproxBase):
"""
Perform approximate comparisons for mappings where the values are numbers
(the keys can be anything).
"""
def __repr__(self):
return "approx({0!r})".format(dict(
(k, self._approx_scalar(v))
for k, v in self.expected.items()))
def __eq__(self, actual):
if set(actual.keys()) != set(self.expected.keys()):
return False
return ApproxBase.__eq__(self, actual)
def _yield_comparisons(self, actual):
for k in self.expected.keys():
yield actual[k], self.expected[k]
class ApproxSequence(ApproxBase):
"""
Perform approximate comparisons for sequences of numbers.
"""
def __repr__(self):
seq_type = type(self.expected)
if seq_type not in (tuple, list, set):
seq_type = list
return "approx({0!r})".format(seq_type(
self._approx_scalar(x) for x in self.expected))
def __eq__(self, actual):
if len(actual) != len(self.expected):
return False
return ApproxBase.__eq__(self, actual)
def _yield_comparisons(self, actual):
return zip(actual, self.expected)
class ApproxScalar(ApproxBase):
"""
Perform approximate comparisons for single numbers only.
"""
DEFAULT_ABSOLUTE_TOLERANCE = 1e-12
DEFAULT_RELATIVE_TOLERANCE = 1e-6
def __repr__(self):
"""
Return a string communicating both the expected value and the tolerance
for the comparison being made, e.g. '1.0 +- 1e-6'. Use the unicode
plus/minus symbol if this is python3 (it's too hard to get right for
python2).
"""
if isinstance(self.expected, complex):
return str(self.expected)
# Infinities aren't compared using tolerances, so don't show a
# tolerance.
if math.isinf(self.expected):
return str(self.expected)
# If a sensible tolerance can't be calculated, self.tolerance will
# raise a ValueError. In this case, display '???'.
try:
vetted_tolerance = '{:.1e}'.format(self.tolerance)
except ValueError:
vetted_tolerance = '???'
if sys.version_info[0] == 2:
return '{0} +- {1}'.format(self.expected, vetted_tolerance)
else:
return u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)
def __eq__(self, actual):
"""
Return true if the given value is equal to the expected value within
the pre-specified tolerance.
"""
if _is_numpy_array(actual):
return ApproxNumpy(actual, self.abs, self.rel, self.nan_ok) == self.expected
# Short-circuit exact equality.
if actual == self.expected:
return True
# Allow the user to control whether NaNs are considered equal to each
# other or not. The abs() calls are for compatibility with complex
# numbers.
if math.isnan(abs(self.expected)):
return self.nan_ok and math.isnan(abs(actual))
# Infinity shouldn't be approximately equal to anything but itself, but
# if there's a relative tolerance, it will be infinite and infinity
# will seem approximately equal to everything. The equal-to-itself
# case would have been short circuited above, so here we can just
# return false if the expected value is infinite. The abs() call is
# for compatibility with complex numbers.
if math.isinf(abs(self.expected)):
return False
# Return true if the two numbers are within the tolerance.
return abs(self.expected - actual) <= self.tolerance
__hash__ = None
@property
def tolerance(self):
"""
Return the tolerance for the comparison. This could be either an
absolute tolerance or a relative tolerance, depending on what the user
specified or which would be larger.
"""
def set_default(x, default):
return x if x is not None else default
# Figure out what the absolute tolerance should be. ``self.abs`` is
# either None or a value specified by the user.
absolute_tolerance = set_default(self.abs, self.DEFAULT_ABSOLUTE_TOLERANCE)
if absolute_tolerance < 0:
raise ValueError("absolute tolerance can't be negative: {}".format(absolute_tolerance))
if math.isnan(absolute_tolerance):
raise ValueError("absolute tolerance can't be NaN.")
# If the user specified an absolute tolerance but not a relative one,
# just return the absolute tolerance.
if self.rel is None:
if self.abs is not None:
return absolute_tolerance
# Figure out what the relative tolerance should be. ``self.rel`` is
# either None or a value specified by the user. This is done after
# we've made sure the user didn't ask for an absolute tolerance only,
# because we don't want to raise errors about the relative tolerance if
# we aren't even going to use it.
relative_tolerance = set_default(self.rel, self.DEFAULT_RELATIVE_TOLERANCE) * abs(self.expected)
if relative_tolerance < 0:
raise ValueError("relative tolerance can't be negative: {}".format(absolute_tolerance))
if math.isnan(relative_tolerance):
raise ValueError("relative tolerance can't be NaN.")
# Return the larger of the relative and absolute tolerances.
return max(relative_tolerance, absolute_tolerance)
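# A worked example of the rule above (assumed values, not part of this diff):
# for approx(5.0, rel=1e-3, abs=1e-6) the relative tolerance is
# 1e-3 * abs(5.0) == 5e-3 and the absolute tolerance is 1e-6, so the comparison
# uses max(5e-3, 1e-6) == 5e-3.
from pytest import approx

assert 5.004 == approx(5.0, rel=1e-3, abs=1e-6)   # |5.004 - 5.0| <= 5e-3
assert 5.006 != approx(5.0, rel=1e-3, abs=1e-6)   # |5.006 - 5.0| >  5e-3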
class ApproxDecimal(ApproxScalar):
from decimal import Decimal
DEFAULT_ABSOLUTE_TOLERANCE = Decimal('1e-12')
DEFAULT_RELATIVE_TOLERANCE = Decimal('1e-6')
def approx(expected, rel=None, abs=None, nan_ok=False):
"""
Assert that two numbers (or two sets of numbers) are equal to each other
within some tolerance.
@ -45,21 +309,42 @@ class approx(object):
>>> 0.1 + 0.2 == approx(0.3)
True
The same syntax also works on sequences of numbers::
The same syntax also works for sequences of numbers::
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
Dictionary *values*::
>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
``numpy`` arrays::
>>> import numpy as np # doctest: +SKIP
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6])) # doctest: +SKIP
True
And for a ``numpy`` array against a scalar::
>>> import numpy as np # doctest: +SKIP
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3) # doctest: +SKIP
True
By default, ``approx`` considers numbers within a relative tolerance of
``1e-6`` (i.e. one part in a million) of its expected value to be equal.
This treatment would lead to surprising results if the expected value was
``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``.
To handle this case less surprisingly, ``approx`` also considers numbers
within an absolute tolerance of ``1e-12`` of its expected value to be
equal. Infinite numbers are another special case. They are only
considered equal to themselves, regardless of the relative tolerance. Both
the relative and absolute tolerances can be changed by passing arguments to
the ``approx`` constructor::
equal. Infinity and NaN are special cases. Infinity is only considered
equal to itself, regardless of the relative tolerance. NaN is not
considered equal to anything by default, but you can make it be equal to
itself by setting the ``nan_ok`` argument to True. (This is meant to
facilitate comparing arrays that use NaN to mean "no data".)
Both the relative and absolute tolerances can be changed by passing
arguments to the ``approx`` constructor::
>>> 1.0001 == approx(1)
False
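# A hedged sketch of the NaN and near-zero handling described above; values are
# illustrative.
import pytest

nan = float("nan")
assert nan != pytest.approx(nan)               # NaN is unequal by default
assert nan == pytest.approx(nan, nan_ok=True)  # opt in to NaN == NaN
assert 0.0 == pytest.approx(1e-13)             # within the default 1e-12 absolute tolerance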
@ -121,140 +406,75 @@ class approx(object):
is asymmetric and you can think of ``b`` as the reference value. In the
special case that you explicitly specify an absolute tolerance but not a
relative tolerance, only the absolute tolerance is considered.
.. warning::
.. versionchanged:: 3.2
In order to avoid inconsistent behavior, ``TypeError`` is
raised for ``>``, ``>=``, ``<`` and ``<=`` comparisons.
The example below illustrates the problem::
assert approx(0.1) > 0.1 + 1e-10 # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1) # calls approx(0.1).__lt__(0.1 + 1e-10)
In the second example one expects ``approx(0.1).__le__(0.1 + 1e-10)``
to be called. But instead, ``approx(0.1).__lt__(0.1 + 1e-10)`` is used to
comparison. This is because the call hierarchy of rich comparisons
follows a fixed behavior. `More information...`__
__ https://docs.python.org/3/reference/datamodel.html#object.__ge__
"""
def __init__(self, expected, rel=None, abs=None):
self.expected = expected
self.abs = abs
self.rel = rel
from collections import Mapping, Sequence
from _pytest.compat import STRING_TYPES as String
from decimal import Decimal
def __repr__(self):
return ', '.join(repr(x) for x in self.expected)
# Delegate the comparison to a class that knows how to deal with the type
# of the expected value (e.g. int, float, list, dict, numpy.array, etc).
#
# This architecture is really driven by the need to support numpy arrays.
# The only way to override `==` for arrays without requiring that approx be
# the left operand is to inherit the approx object from `numpy.ndarray`.
# But that can't be a general solution, because it requires (1) numpy to be
# installed and (2) the expected value to be a numpy array. So the general
# solution is to delegate each type of expected value to a different class.
#
# This has the advantage that it made it easy to support mapping types
# (i.e. dict). The old code accepted mapping types, but would only compare
# their keys, which is probably not what most people would expect.
def __eq__(self, actual):
from collections import Iterable
if not isinstance(actual, Iterable):
actual = [actual]
if len(actual) != len(self.expected):
return False
return all(a == x for a, x in zip(actual, self.expected))
if _is_numpy_array(expected):
cls = ApproxNumpy
elif isinstance(expected, Mapping):
cls = ApproxMapping
elif isinstance(expected, Sequence) and not isinstance(expected, String):
cls = ApproxSequence
elif isinstance(expected, Decimal):
cls = ApproxDecimal
else:
cls = ApproxScalar
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
@property
def expected(self):
# Regardless of whether the user-specified expected value is a number
# or a sequence of numbers, return a list of ApproxNotIterable objects
# that can be compared against.
from collections import Iterable
approx_non_iter = lambda x: ApproxNonIterable(x, self.rel, self.abs)
if isinstance(self._expected, Iterable):
return [approx_non_iter(x) for x in self._expected]
else:
return [approx_non_iter(self._expected)]
@expected.setter
def expected(self, expected):
self._expected = expected
return cls(expected, rel, abs, nan_ok)
class ApproxNonIterable(object):
def _is_numpy_array(obj):
"""
Perform approximate comparisons for single numbers only.
In other words, the ``expected`` attribute for objects of this class must
be some sort of number. This is in contrast to the ``approx`` class, where
the ``expected`` attribute can either be a number of a sequence of numbers.
This class is responsible for making comparisons, while ``approx`` is
responsible for abstracting the difference between numbers and sequences of
numbers. Although this class can stand on its own, it's only meant to be
used within ``approx``.
Return true if the given object is a numpy array. Make a special effort to
avoid importing numpy unless it's really necessary.
"""
import inspect
def __init__(self, expected, rel=None, abs=None):
self.expected = expected
self.abs = abs
self.rel = rel
for cls in inspect.getmro(type(obj)):
if cls.__module__ == 'numpy':
try:
import numpy as np
return isinstance(obj, np.ndarray)
except ImportError:
pass
def __repr__(self):
if isinstance(self.expected, complex):
return str(self.expected)
return False
# Infinities aren't compared using tolerances, so don't show a
# tolerance.
if math.isinf(self.expected):
return str(self.expected)
# If a sensible tolerance can't be calculated, self.tolerance will
# raise a ValueError. In this case, display '???'.
try:
vetted_tolerance = '{:.1e}'.format(self.tolerance)
except ValueError:
vetted_tolerance = '???'
if sys.version_info[0] == 2:
return '{0} +- {1}'.format(self.expected, vetted_tolerance)
else:
return u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)
def __eq__(self, actual):
# Short-circuit exact equality.
if actual == self.expected:
return True
# Infinity shouldn't be approximately equal to anything but itself, but
# if there's a relative tolerance, it will be infinite and infinity
# will seem approximately equal to everything. The equal-to-itself
# case would have been short circuited above, so here we can just
# return false if the expected value is infinite. The abs() call is
# for compatibility with complex numbers.
if math.isinf(abs(self.expected)):
return False
# Return true if the two numbers are within the tolerance.
return abs(self.expected - actual) <= self.tolerance
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
@property
def tolerance(self):
set_default = lambda x, default: x if x is not None else default
# Figure out what the absolute tolerance should be. ``self.abs`` is
# either None or a value specified by the user.
absolute_tolerance = set_default(self.abs, 1e-12)
if absolute_tolerance < 0:
raise ValueError("absolute tolerance can't be negative: {}".format(absolute_tolerance))
if math.isnan(absolute_tolerance):
raise ValueError("absolute tolerance can't be NaN.")
# If the user specified an absolute tolerance but not a relative one,
# just return the absolute tolerance.
if self.rel is None:
if self.abs is not None:
return absolute_tolerance
# Figure out what the relative tolerance should be. ``self.rel`` is
# either None or a value specified by the user. This is done after
# we've made sure the user didn't ask for an absolute tolerance only,
# because we don't want to raise errors about the relative tolerance if
# we aren't even going to use it.
relative_tolerance = set_default(self.rel, 1e-6) * abs(self.expected)
if relative_tolerance < 0:
raise ValueError("relative tolerance can't be negative: {}".format(absolute_tolerance))
if math.isnan(relative_tolerance):
raise ValueError("relative tolerance can't be NaN.")
# Return the larger of the relative and absolute tolerances.
return max(relative_tolerance, absolute_tolerance)
# builtin pytest.raises helper
@ -263,10 +483,13 @@ def raises(expected_exception, *args, **kwargs):
Assert that a code block/function call raises ``expected_exception``
and raise a failure exception otherwise.
:arg message: if specified, provides a custom failure message if the
exception is not raised
:arg match: if specified, asserts that the exception matches a text or regex
This helper produces a ``ExceptionInfo()`` object (see below).
If using Python 2.5 or above, you may use this function as a
context manager::
You may use this function as a context manager::
>>> with raises(ZeroDivisionError):
... 1/0
@ -282,7 +505,6 @@ def raises(expected_exception, *args, **kwargs):
...
Failed: Expecting ZeroDivisionError
.. note::
When using ``pytest.raises`` as a context manager, it's worthwhile to
@ -306,7 +528,8 @@ def raises(expected_exception, *args, **kwargs):
...
>>> assert exc_info.type == ValueError
Or you can use the keyword argument ``match`` to assert that the
Since version ``3.1`` you can use the keyword argument ``match`` to assert that the
exception matches a text or regex::
>>> with raises(ValueError, match='must be 0 or None'):
@ -315,8 +538,12 @@ def raises(expected_exception, *args, **kwargs):
>>> with raises(ValueError, match=r'must be \d+$'):
... raise ValueError("value must be 42")
**Legacy forms**
Or you can specify a callable by passing a to-be-called lambda::
The forms below are fully supported but are discouraged for new code because the
context manager form is regarded as more readable and less error-prone.
It is possible to specify a callable by passing a to-be-called lambda::
>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>
@ -330,13 +557,17 @@ def raises(expected_exception, *args, **kwargs):
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>
A third possibility is to use a string to be executed::
It is also possible to pass a string to be evaluated at runtime::
>>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...>
.. autoclass:: _pytest._code.ExceptionInfo
:members:
The string will be evaluated using the same ``locals()`` and ``globals()``
at the moment of the ``raises`` call.
.. currentmodule:: _pytest._code
Consult the API of ``excinfo`` objects: :class:`ExceptionInfo`.
.. note::
Similar to caught exception objects in Python, explicitly clearing
@ -354,14 +585,11 @@ def raises(expected_exception, *args, **kwargs):
"""
__tracebackhide__ = True
msg = ("exceptions must be old-style classes or"
" derived from BaseException, not %s")
if isinstance(expected_exception, tuple):
for exc in expected_exception:
if not isclass(exc):
raise TypeError(msg % type(exc))
elif not isclass(expected_exception):
raise TypeError(msg % type(expected_exception))
base_type = (type, text_type, binary_type)
for exc in filterfalse(isclass, always_iterable(expected_exception, base_type)):
msg = ("exceptions must be old-style classes or"
" derived from BaseException, not %s")
raise TypeError(msg % type(exc))
message = "DID NOT RAISE {0}".format(expected_exception)
match_expr = None
@ -371,7 +599,10 @@ def raises(expected_exception, *args, **kwargs):
message = kwargs.pop("message")
if "match" in kwargs:
match_expr = kwargs.pop("match")
message += " matching '{0}'".format(match_expr)
if kwargs:
msg = 'Unexpected keyword arguments passed to pytest.raises: '
msg += ', '.join(kwargs.keys())
raise TypeError(msg)
return RaisesContext(expected_exception, message, match_expr)
elif isinstance(args[0], str):
code, = args
@ -379,7 +610,7 @@ def raises(expected_exception, *args, **kwargs):
frame = sys._getframe(1)
loc = frame.f_locals.copy()
loc.update(kwargs)
#print "raises frame scope: %r" % frame.f_locals
# print "raises frame scope: %r" % frame.f_locals
try:
code = _pytest._code.Source(code).compile()
py.builtin.exec_(code, frame.f_globals, loc)
@ -414,17 +645,10 @@ class RaisesContext(object):
__tracebackhide__ = True
if tp[0] is None:
fail(self.message)
if sys.version_info < (2, 7):
# py26: on __exit__() exc_value often does not contain the
# exception value.
# http://bugs.python.org/issue7853
if not isinstance(tp[1], BaseException):
exc_type, value, traceback = tp
tp = exc_type, exc_type(value), traceback
self.excinfo.__init__(tp)
suppress_exception = issubclass(self.excinfo.type, self.expected_exception)
if sys.version_info[0] == 2 and suppress_exception:
sys.exc_clear()
if self.match_expr:
if self.match_expr and suppress_exception:
self.excinfo.match(self.match_expr)
return suppress_exception
@ -7,15 +7,16 @@ import _pytest._code
import py
import sys
import warnings
import re
from _pytest.fixtures import yield_fixture
from _pytest.outcomes import fail
@yield_fixture
def recwarn():
"""Return a WarningsRecorder instance that provides these methods:
* ``pop(category=None)``: return last warning matching the category.
* ``clear()``: clear list of warnings
"""Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.
See http://docs.python.org/library/warnings.html for information
on warning categories.
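# A short usage sketch of the recwarn fixture documented above; the warning text
# is illustrative.
import warnings


def test_deprecation_is_recorded(recwarn):
    warnings.warn("use new_api() instead", DeprecationWarning)
    assert len(recwarn) == 1
    warning = recwarn.pop(DeprecationWarning)
    assert "new_api" in str(warning.message)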
@ -84,11 +85,11 @@ class _DeprecatedCallContext(object):
def warns(expected_warning, *args, **kwargs):
"""Assert that code raises a particular class of warning.
Specifically, the input @expected_warning can be a warning class or
tuple of warning classes, and the code must return that warning
(if a single class) or one of those warnings (if a tuple).
Specifically, the parameter ``expected_warning`` can be a warning class or
sequence of warning classes, and the code inside the ``with`` block must issue a warning of that class or
classes.
This helper produces a list of ``warnings.WarningMessage`` objects,
This helper produces a list of :class:`warnings.WarningMessage` objects,
one for each warning raised.
This function can be used as a context manager, or any of the other ways
@ -96,10 +97,28 @@ def warns(expected_warning, *args, **kwargs):
>>> with warns(RuntimeWarning):
... warnings.warn("my warning", RuntimeWarning)
In the context manager form you may use the keyword argument ``match`` to assert
that the exception matches a text or regex::
>>> with warns(UserWarning, match='must be 0 or None'):
... warnings.warn("value must be 0 or None", UserWarning)
>>> with warns(UserWarning, match=r'must be \d+$'):
... warnings.warn("value must be 42", UserWarning)
>>> with warns(UserWarning, match=r'must be \d+$'):
... warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
...
Failed: DID NOT WARN. No warnings of type ...UserWarning... was emitted...
"""
wcheck = WarningsChecker(expected_warning)
match_expr = None
if not args:
return wcheck
if "match" in kwargs:
match_expr = kwargs.pop("match")
return WarningsChecker(expected_warning, match_expr=match_expr)
elif isinstance(args[0], str):
code, = args
assert isinstance(code, str)
@ -107,12 +126,12 @@ def warns(expected_warning, *args, **kwargs):
loc = frame.f_locals.copy()
loc.update(kwargs)
with wcheck:
with WarningsChecker(expected_warning, match_expr=match_expr):
code = _pytest._code.Source(code).compile()
py.builtin.exec_(code, frame.f_globals, loc)
else:
func = args[0]
with wcheck:
with WarningsChecker(expected_warning, match_expr=match_expr):
return func(*args[1:], **kwargs)
@ -172,7 +191,7 @@ class WarningsRecorder(warnings.catch_warnings):
class WarningsChecker(WarningsRecorder):
def __init__(self, expected_warning=None):
def __init__(self, expected_warning=None, match_expr=None):
super(WarningsChecker, self).__init__()
msg = ("exceptions must be old-style classes or "
@ -187,6 +206,7 @@ class WarningsChecker(WarningsRecorder):
raise TypeError(msg % type(expected_warning))
self.expected_warning = expected_warning
self.match_expr = match_expr
def __exit__(self, *exc_info):
super(WarningsChecker, self).__exit__(*exc_info)
@ -197,8 +217,17 @@ class WarningsChecker(WarningsRecorder):
if not any(issubclass(r.category, self.expected_warning)
for r in self):
__tracebackhide__ = True
from _pytest.runner import fail
fail("DID NOT WARN. No warnings of type {0} was emitted. "
"The list of emitted warnings is: {1}.".format(
self.expected_warning,
[each.message for each in self]))
self.expected_warning,
[each.message for each in self]))
elif self.match_expr is not None:
for r in self:
if issubclass(r.category, self.expected_warning):
if re.compile(self.match_expr).search(str(r.message)):
break
else:
fail("DID NOT WARN. No warnings of type {0} matching"
" ('{1}') was emitted. The list of emitted warnings"
" is: {2}.".format(self.expected_warning, self.match_expr,
[each.message for each in self]))
@ -6,11 +6,13 @@ from __future__ import absolute_import, division, print_function
import py
import os
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "resultlog plugin options")
group.addoption('--resultlog', '--result-log', action="store",
metavar="path", default=None,
help="DEPRECATED path for machine-readable result log.")
metavar="path", default=None,
help="DEPRECATED path for machine-readable result log.")
def pytest_configure(config):
resultlog = config.option.resultlog
@ -19,13 +21,14 @@ def pytest_configure(config):
dirname = os.path.dirname(os.path.abspath(resultlog))
if not os.path.isdir(dirname):
os.makedirs(dirname)
logfile = open(resultlog, 'w', 1) # line buffered
logfile = open(resultlog, 'w', 1) # line buffered
config._resultlog = ResultLog(config, logfile)
config.pluginmanager.register(config._resultlog)
from _pytest.deprecated import RESULT_LOG
config.warn('C1', RESULT_LOG)
def pytest_unconfigure(config):
resultlog = getattr(config, '_resultlog', None)
if resultlog:
@ -33,6 +36,7 @@ def pytest_unconfigure(config):
del config._resultlog
config.pluginmanager.unregister(resultlog)
def generic_path(item):
chain = item.listchain()
gpath = [chain[0].name]
@ -56,10 +60,11 @@ def generic_path(item):
fspath = newfspath
return ''.join(gpath)
class ResultLog(object):
def __init__(self, config, logfile):
self.config = config
self.logfile = logfile # preferably line buffered
self.logfile = logfile # preferably line buffered
def write_log_entry(self, testpath, lettercode, longrepr):
print("%s %s" % (lettercode, testpath), file=self.logfile)
@ -2,22 +2,24 @@
from __future__ import absolute_import, division, print_function
import bdb
import os
import sys
from time import time
import py
from _pytest._code.code import TerminalRepr, ExceptionInfo
from _pytest.outcomes import skip, Skipped, TEST_OUTCOME
#
# pytest plugin hooks
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "reporting", after="general")
group.addoption('--durations',
action="store", type=int, default=None, metavar="N",
help="show N slowest setup/test durations (N=0 for all)."),
action="store", type=int, default=None, metavar="N",
help="show N slowest setup/test durations (N=0 for all)."),
def pytest_terminal_summary(terminalreporter):
durations = terminalreporter.config.option.durations
@ -42,24 +44,28 @@ def pytest_terminal_summary(terminalreporter):
for rep in dlist:
nodeid = rep.nodeid.replace("::()::", "::")
tr.write_line("%02.2fs %-8s %s" %
(rep.duration, rep.when, nodeid))
(rep.duration, rep.when, nodeid))
def pytest_sessionstart(session):
session._setupstate = SetupState()
def pytest_sessionfinish(session):
session._setupstate.teardown_all()
class NodeInfo:
def __init__(self, location):
self.location = location
def pytest_runtest_protocol(item, nextitem):
item.ihook.pytest_runtest_logstart(
nodeid=item.nodeid, location=item.location,
)
runtestprotocol(item, nextitem=nextitem)
item.ihook.pytest_runtest_logfinish(
nodeid=item.nodeid, location=item.location,
)
return True
def runtestprotocol(item, log=True, nextitem=None):
hasrequest = hasattr(item, "_request")
if hasrequest and not item._request:
@ -72,7 +78,7 @@ def runtestprotocol(item, log=True, nextitem=None):
if not item.config.option.setuponly:
reports.append(call_and_report(item, "call", log))
reports.append(call_and_report(item, "teardown", log,
nextitem=nextitem))
nextitem=nextitem))
# after all teardown hooks have been called
# want funcargs and request info to go away
if hasrequest:
@ -80,6 +86,7 @@ def runtestprotocol(item, log=True, nextitem=None):
item.funcargs = None
return reports
def show_test_item(item):
"""Show test function, parameters and the fixtures of the test item."""
tw = item.config.get_terminal_writer()
@ -90,10 +97,14 @@ def show_test_item(item):
if used_fixtures:
tw.write(' (fixtures used: {0})'.format(', '.join(used_fixtures)))
def pytest_runtest_setup(item):
_update_current_test_var(item, 'setup')
item.session._setupstate.prepare(item)
def pytest_runtest_call(item):
_update_current_test_var(item, 'call')
try:
item.runtest()
except Exception:
@ -106,8 +117,28 @@ def pytest_runtest_call(item):
del tb # Get rid of it in this namespace
raise
def pytest_runtest_teardown(item, nextitem):
_update_current_test_var(item, 'teardown')
item.session._setupstate.teardown_exact(item, nextitem)
_update_current_test_var(item, None)
def _update_current_test_var(item, when):
"""
Update PYTEST_CURRENT_TEST to reflect the current item and stage.
If ``when`` is None, delete PYTEST_CURRENT_TEST from the environment.
"""
var_name = 'PYTEST_CURRENT_TEST'
if when:
value = '{0} ({1})'.format(item.nodeid, when)
# don't allow null bytes on environment variables (see #2644, #2957)
value = value.replace('\x00', '(null)')
os.environ[var_name] = value
else:
os.environ.pop(var_name)
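# A hedged sketch of how a monitoring thread inside the test process could read
# the variable maintained above; the polling interval is illustrative.
import os
import time


def watch_current_test(poll_seconds=5.0):
    while True:
        current = os.environ.get("PYTEST_CURRENT_TEST")
        if current:
            print("pytest is currently in: %s" % current)
        time.sleep(poll_seconds)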
def pytest_report_teststatus(report):
if report.when in ("setup", "teardown"):
@ -133,21 +164,25 @@ def call_and_report(item, when, log=True, **kwds):
hook.pytest_exception_interact(node=item, call=call, report=report)
return report
def check_interactive_exception(call, report):
return call.excinfo and not (
hasattr(report, "wasxfail") or
call.excinfo.errisinstance(skip.Exception) or
call.excinfo.errisinstance(bdb.BdbQuit))
hasattr(report, "wasxfail") or
call.excinfo.errisinstance(skip.Exception) or
call.excinfo.errisinstance(bdb.BdbQuit))
def call_runtest_hook(item, when, **kwds):
hookname = "pytest_runtest_" + when
ihook = getattr(item.ihook, hookname)
return CallInfo(lambda: ihook(item=item, **kwds), when=when)
class CallInfo:
class CallInfo(object):
""" Result/Exception info a function invocation. """
#: None or ExceptionInfo object.
excinfo = None
def __init__(self, func, when):
#: context of invocation: one of "setup", "call",
#: "teardown", "memocollect"
@ -158,7 +193,7 @@ class CallInfo:
except KeyboardInterrupt:
self.stop = time()
raise
except:
except: # noqa
self.excinfo = ExceptionInfo()
self.stop = time()
@ -169,6 +204,7 @@ class CallInfo:
status = "result: %r" % (self.result,)
return "<CallInfo when=%r %s>" % (self.when, status)
def getslaveinfoline(node):
try:
return node._slaveinfocache
@ -179,6 +215,7 @@ def getslaveinfoline(node):
d['id'], d['sysplatform'], ver, d['executable'])
return s
class BaseReport(object):
def __init__(self, **kw):
@ -219,6 +256,14 @@ class BaseReport(object):
exc = tw.stringio.getvalue()
return exc.strip()
@property
def caplog(self):
"""Return captured log lines, if log capturing is enabled
.. versionadded:: 3.5
"""
return '\n'.join(content for (prefix, content) in self.get_sections('Captured log'))
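# A hedged sketch of a conftest.py hook that uses the new ``caplog`` property to
# print captured log lines of failing test calls.
def pytest_runtest_logreport(report):
    if report.when == "call" and report.failed and report.caplog:
        print("captured logging for %s:" % report.nodeid)
        print(report.caplog)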
@property
def capstdout(self):
"""Return captured text from stdout, if capturing is enabled
@ -243,10 +288,11 @@ class BaseReport(object):
def fspath(self):
return self.nodeid.split("::")[0]
def pytest_runtest_makereport(item, call):
when = call.when
duration = call.stop-call.start
keywords = dict([(x,1) for x in item.keywords])
duration = call.stop - call.start
keywords = dict([(x, 1) for x in item.keywords])
excinfo = call.excinfo
sections = []
if not call.excinfo:
@ -264,21 +310,23 @@ def pytest_runtest_makereport(item, call):
outcome = "failed"
if call.when == "call":
longrepr = item.repr_failure(excinfo)
else: # exception in setup or teardown
else: # exception in setup or teardown
longrepr = item._repr_failure_py(excinfo,
style=item.config.option.tbstyle)
style=item.config.option.tbstyle)
for rwhen, key, content in item._report_sections:
sections.append(("Captured %s %s" %(key, rwhen), content))
sections.append(("Captured %s %s" % (key, rwhen), content))
return TestReport(item.nodeid, item.location,
keywords, outcome, longrepr, when,
sections, duration)
sections, duration, user_properties=item.user_properties)
class TestReport(BaseReport):
""" Basic test report object (also used for setup and teardown calls if
they fail).
"""
def __init__(self, nodeid, location, keywords, outcome,
longrepr, when, sections=(), duration=0, **extra):
longrepr, when, sections=(), duration=0, user_properties=(), **extra):
#: normalized collection node id
self.nodeid = nodeid
@ -300,6 +348,10 @@ class TestReport(BaseReport):
#: one of 'setup', 'call', 'teardown' to indicate runtest phase.
self.when = when
#: user properties is a list of tuples (name, value) that holds user
#: defined properties of the test
self.user_properties = user_properties
#: list of pairs ``(str, str)`` of extra information which needs to
#: be marshallable. Used by pytest to add captured text
#: from ``stdout`` and ``stderr``, but may be used by other plugins
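# A hedged sketch of filling the item's ``user_properties`` list from a fixture
# so the values travel on the report via the ``user_properties`` assignment
# above; the fixture name is illustrative and assumes the item exposes that list.
import time

import pytest


@pytest.fixture
def record_duration(request):
    start = time.time()
    yield
    request.node.user_properties.append(("duration", time.time() - start))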
@ -315,14 +367,17 @@ class TestReport(BaseReport):
return "<TestReport %r when=%r outcome=%r>" % (
self.nodeid, self.when, self.outcome)
class TeardownErrorReport(BaseReport):
outcome = "failed"
when = "teardown"
def __init__(self, longrepr, **extra):
self.longrepr = longrepr
self.sections = []
self.__dict__.update(extra)
def pytest_make_collect_report(collector):
call = CallInfo(
lambda: list(collector.collect()),
@ -344,7 +399,7 @@ def pytest_make_collect_report(collector):
errorinfo = CollectErrorRepr(errorinfo)
longrepr = errorinfo
rep = CollectReport(collector.nodeid, outcome, longrepr,
getattr(call, 'result', None))
getattr(call, 'result', None))
rep.call = call # see collect_one_node
return rep
@ -365,16 +420,20 @@ class CollectReport(BaseReport):
def __repr__(self):
return "<CollectReport %r lenresult=%s outcome=%r>" % (
self.nodeid, len(self.result), self.outcome)
self.nodeid, len(self.result), self.outcome)
class CollectErrorRepr(TerminalRepr):
def __init__(self, msg):
self.longrepr = msg
def toterminal(self, out):
out.line(self.longrepr, red=True)
class SetupState(object):
""" shared state for setting up/tearing down test items or collectors. """
def __init__(self):
self.stack = []
self._finalizers = {}
@ -385,8 +444,8 @@ class SetupState(object):
is called at the end of teardown_all().
"""
assert colitem and not isinstance(colitem, tuple)
assert py.builtin.callable(finalizer)
#assert colitem in self.stack # some unit tests don't setup stack :/
assert callable(finalizer)
# assert colitem in self.stack # some unit tests don't setup stack :/
self._finalizers.setdefault(colitem, []).append(finalizer)
def _pop_and_teardown(self):
@ -400,7 +459,7 @@ class SetupState(object):
fin = finalizers.pop()
try:
fin()
except Exception:
except TEST_OUTCOME:
# XXX Only first exception will be seen by user,
# ideally all should be reported.
if exc is None:
@ -414,7 +473,7 @@ class SetupState(object):
colitem.teardown()
for colitem in self._finalizers:
assert colitem is None or colitem in self.stack \
or isinstance(colitem, tuple)
or isinstance(colitem, tuple)
def teardown_all(self):
while self.stack:
@ -447,10 +506,11 @@ class SetupState(object):
self.stack.append(col)
try:
col.setup()
except Exception:
except TEST_OUTCOME:
col._prepare_exc = sys.exc_info()
raise
def collect_one_node(collector):
ihook = collector.ihook
ihook.pytest_collectstart(collector=collector)
@ -459,122 +519,3 @@ def collect_one_node(collector):
if call and check_interactive_exception(call, rep):
ihook.pytest_exception_interact(node=collector, call=call, report=rep)
return rep
# =============================================================
# Test OutcomeExceptions and helpers for creating them.
class OutcomeException(Exception):
""" OutcomeException and its subclass instances indicate and
contain info about test and collection outcomes.
"""
def __init__(self, msg=None, pytrace=True):
Exception.__init__(self, msg)
self.msg = msg
self.pytrace = pytrace
def __repr__(self):
if self.msg:
val = self.msg
if isinstance(val, bytes):
val = py._builtin._totext(val, errors='replace')
return val
return "<%s instance>" %(self.__class__.__name__,)
__str__ = __repr__
class Skipped(OutcomeException):
# XXX hackish: on 3k we fake to live in the builtins
# in order to have Skipped exception printing shorter/nicer
__module__ = 'builtins'
def __init__(self, msg=None, pytrace=True, allow_module_level=False):
OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
self.allow_module_level = allow_module_level
class Failed(OutcomeException):
""" raised from an explicit call to pytest.fail() """
__module__ = 'builtins'
class Exit(KeyboardInterrupt):
""" raised for immediate program exits (no tracebacks/summaries)"""
def __init__(self, msg="unknown reason"):
self.msg = msg
KeyboardInterrupt.__init__(self, msg)
# exposed helper methods
def exit(msg):
""" exit testing process as if KeyboardInterrupt was triggered. """
__tracebackhide__ = True
raise Exit(msg)
exit.Exception = Exit
def skip(msg=""):
""" skip an executing test with the given message. Note: it's usually
better to use the pytest.mark.skipif marker to declare a test to be
skipped under certain conditions like mismatching platforms or
dependencies. See the pytest_skipping plugin for details.
"""
__tracebackhide__ = True
raise Skipped(msg=msg)
skip.Exception = Skipped
def fail(msg="", pytrace=True):
""" explicitly fail an currently-executing test with the given Message.
:arg pytrace: if false the msg represents the full failure information
and no python traceback will be reported.
"""
__tracebackhide__ = True
raise Failed(msg=msg, pytrace=pytrace)
fail.Exception = Failed
def importorskip(modname, minversion=None):
""" return imported module if it has at least "minversion" as its
__version__ attribute. If no minversion is specified the a skip
is only triggered if the module can not be imported.
"""
import warnings
__tracebackhide__ = True
compile(modname, '', 'eval') # to catch syntaxerrors
should_skip = False
with warnings.catch_warnings():
# make sure to ignore ImportWarnings that might happen because
# of existing directories with the same name we're trying to
# import but without a __init__.py file
warnings.simplefilter('ignore')
try:
__import__(modname)
except ImportError:
# Do not raise chained exception here(#1485)
should_skip = True
if should_skip:
raise Skipped("could not import %r" %(modname,), allow_module_level=True)
mod = sys.modules[modname]
if minversion is None:
return mod
verattr = getattr(mod, '__version__', None)
if minversion is not None:
try:
from pkg_resources import parse_version as pv
except ImportError:
raise Skipped("we have a required version for %r but can not import "
"pkg_resources to parse version strings." % (modname,),
allow_module_level=True)
if verattr is None or pv(verattr) < pv(minversion):
raise Skipped("module %r has __version__ %r, required is: %r" %(
modname, verattr, minversion), allow_module_level=True)
return mod
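# A minimal usage sketch for the helpers defined above (illustrative only;
# the module name, version and conditions below are placeholder values):
#
#     import sys
#     import pytest
#
#     docutils = pytest.importorskip("docutils", minversion="0.12")
#
#     def test_example():
#         if sys.platform == "win32":
#             pytest.skip("not supported on win32")
#         if docutils.__version__ is None:
#             pytest.fail("docutils lacks __version__", pytrace=False)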

View File

@ -44,7 +44,7 @@ def _show_fixture_action(fixturedef, msg):
config = fixturedef._fixturemanager.config
capman = config.pluginmanager.getplugin('capturemanager')
if capman:
out, err = capman.suspendcapture()
out, err = capman.suspend_global_capture()
tw = config.get_terminal_writer()
tw.line()
@ -63,7 +63,7 @@ def _show_fixture_action(fixturedef, msg):
tw.write('[{0}]'.format(fixturedef.cached_param))
if capman:
capman.resumecapture()
capman.resume_global_capture()
sys.stdout.write(out)
sys.stderr.write(err)

View File

@ -1,26 +1,22 @@
""" support for skip/xfail functions and markers. """
from __future__ import absolute_import, division, print_function
import os
import sys
import traceback
import py
from _pytest.config import hookimpl
from _pytest.mark import MarkInfo, MarkDecorator
from _pytest.runner import fail, skip
from _pytest.mark.evaluate import MarkEvaluator
from _pytest.outcomes import fail, skip, xfail
def pytest_addoption(parser):
group = parser.getgroup("general")
group.addoption('--runxfail',
action="store_true", dest="runxfail", default=False,
help="run tests even if they are marked xfail")
action="store_true", dest="runxfail", default=False,
help="run tests even if they are marked xfail")
parser.addini("xfail_strict", "default for the strict parameter of xfail "
"markers when not given explicitly (default: "
"False)",
default=False,
type="bool")
parser.addini("xfail_strict",
"default for the strict parameter of xfail "
"markers when not given explicitly (default: False)",
default=False,
type="bool")
def pytest_configure(config):
@ -33,151 +29,45 @@ def pytest_configure(config):
def nop(*args, **kwargs):
pass
nop.Exception = XFailed
nop.Exception = xfail.Exception
setattr(pytest, "xfail", nop)
config.addinivalue_line("markers",
"skip(reason=None): skip the given test function with an optional reason. "
"Example: skip(reason=\"no way of currently testing this\") skips the "
"test."
)
"skip(reason=None): skip the given test function with an optional reason. "
"Example: skip(reason=\"no way of currently testing this\") skips the "
"test."
)
config.addinivalue_line("markers",
"skipif(condition): skip the given test function if eval(condition) "
"results in a True value. Evaluation happens within the "
"module global context. Example: skipif('sys.platform == \"win32\"') "
"skips the test if we are on the win32 platform. see "
"http://pytest.org/latest/skipping.html"
)
"skipif(condition): skip the given test function if eval(condition) "
"results in a True value. Evaluation happens within the "
"module global context. Example: skipif('sys.platform == \"win32\"') "
"skips the test if we are on the win32 platform. see "
"http://pytest.org/latest/skipping.html"
)
config.addinivalue_line("markers",
"xfail(condition, reason=None, run=True, raises=None, strict=False): "
"mark the test function as an expected failure if eval(condition) "
"has a True value. Optionally specify a reason for better reporting "
"and run=False if you don't even want to execute the test function. "
"If only specific exception(s) are expected, you can list them in "
"raises, and if the test fails in other ways, it will be reported as "
"a true failure. See http://pytest.org/latest/skipping.html"
)
class XFailed(fail.Exception):
""" raised from an explicit call to pytest.xfail() """
def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
__tracebackhide__ = True
raise XFailed(reason)
xfail.Exception = XFailed
class MarkEvaluator:
def __init__(self, item, name):
self.item = item
self.name = name
@property
def holder(self):
return self.item.keywords.get(self.name)
def __bool__(self):
return bool(self.holder)
__nonzero__ = __bool__
def wasvalid(self):
return not hasattr(self, 'exc')
def invalidraise(self, exc):
raises = self.get('raises')
if not raises:
return
return not isinstance(exc, raises)
def istrue(self):
try:
return self._istrue()
except Exception:
self.exc = sys.exc_info()
if isinstance(self.exc[1], SyntaxError):
msg = [" " * (self.exc[1].offset + 4) + "^", ]
msg.append("SyntaxError: invalid syntax")
else:
msg = traceback.format_exception_only(*self.exc[:2])
fail("Error evaluating %r expression\n"
" %s\n"
"%s"
% (self.name, self.expr, "\n".join(msg)),
pytrace=False)
def _getglobals(self):
d = {'os': os, 'sys': sys, 'config': self.item.config}
if hasattr(self.item, 'obj'):
d.update(self.item.obj.__globals__)
return d
def _istrue(self):
if hasattr(self, 'result'):
return self.result
if self.holder:
if self.holder.args or 'condition' in self.holder.kwargs:
self.result = False
# "holder" might be a MarkInfo or a MarkDecorator; only
# MarkInfo keeps track of all parameters it received in an
# _arglist attribute
marks = getattr(self.holder, '_marks', None) \
or [self.holder.mark]
for _, args, kwargs in marks:
if 'condition' in kwargs:
args = (kwargs['condition'],)
for expr in args:
self.expr = expr
if isinstance(expr, py.builtin._basestring):
d = self._getglobals()
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in kwargs:
# XXX better be checked at collection time
msg = "you need to specify reason=STRING " \
"when using booleans as conditions."
fail(msg)
result = bool(expr)
if result:
self.result = True
self.reason = kwargs.get('reason', None)
self.expr = expr
return self.result
else:
self.result = True
return getattr(self, 'result', False)
def get(self, attr, default=None):
return self.holder.kwargs.get(attr, default)
def getexplanation(self):
expl = getattr(self, 'reason', None) or self.get('reason', None)
if not expl:
if not hasattr(self, 'expr'):
return ""
else:
return "condition: " + str(self.expr)
return expl
"xfail(condition, reason=None, run=True, raises=None, strict=False): "
"mark the test function as an expected failure if eval(condition) "
"has a True value. Optionally specify a reason for better reporting "
"and run=False if you don't even want to execute the test function. "
"If only specific exception(s) are expected, you can list them in "
"raises, and if the test fails in other ways, it will be reported as "
"a true failure. See http://pytest.org/latest/skipping.html"
)
@hookimpl(tryfirst=True)
def pytest_runtest_setup(item):
# Check if skip or skipif are specified as pytest marks
item._skipped_by_mark = False
eval_skipif = MarkEvaluator(item, 'skipif')
if eval_skipif.istrue():
item._skipped_by_mark = True
skip(eval_skipif.getexplanation())
skipif_info = item.keywords.get('skipif')
if isinstance(skipif_info, (MarkInfo, MarkDecorator)):
eval_skipif = MarkEvaluator(item, 'skipif')
if eval_skipif.istrue():
item._evalskip = eval_skipif
skip(eval_skipif.getexplanation())
skip_info = item.keywords.get('skip')
if isinstance(skip_info, (MarkInfo, MarkDecorator)):
item._evalskip = True
for skip_info in item.iter_markers():
if skip_info.name != 'skip':
continue
item._skipped_by_mark = True
if 'reason' in skip_info.kwargs:
skip(skip_info.kwargs['reason'])
elif skip_info.args:
@ -224,7 +114,6 @@ def pytest_runtest_makereport(item, call):
outcome = yield
rep = outcome.get_result()
evalxfail = getattr(item, '_evalxfail', None)
evalskip = getattr(item, '_evalskip', None)
# unitttest special case, see setting of _unexpectedsuccess
if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
from _pytest.compat import _is_unittest_unexpected_success_a_failure
@ -238,12 +127,12 @@ def pytest_runtest_makereport(item, call):
rep.outcome = "passed"
rep.wasxfail = rep.longrepr
elif item.config.option.runxfail:
pass # don't interfere
pass # don't interfere
elif call.excinfo and call.excinfo.errisinstance(xfail.Exception):
rep.wasxfail = "reason: " + call.excinfo.value.msg
rep.outcome = "skipped"
elif evalxfail and not rep.skipped and evalxfail.wasvalid() and \
evalxfail.istrue():
evalxfail.istrue():
if call.excinfo:
if evalxfail.invalidraise(call.excinfo.value):
rep.outcome = "failed"
@ -260,7 +149,7 @@ def pytest_runtest_makereport(item, call):
else:
rep.outcome = "passed"
rep.wasxfail = explanation
elif evalskip is not None and rep.skipped and type(rep.longrepr) is tuple:
elif getattr(item, '_skipped_by_mark', False) and rep.skipped and type(rep.longrepr) is tuple:
# skipped by mark.skipif; change the location of the failure
# to point to the item definition, otherwise it will display
# the location of where the skip exception was raised within pytest
@ -268,7 +157,10 @@ def pytest_runtest_makereport(item, call):
filename, line = item.location[:2]
rep.longrepr = filename, line, reason
# called by terminalreporter progress reporting
def pytest_report_teststatus(report):
if hasattr(report, "wasxfail"):
if report.skipped:
@ -276,11 +168,14 @@ def pytest_report_teststatus(report):
elif report.passed:
return "xpassed", "X", ("XPASS", {'yellow': True})
# called by the terminalreporter instance/plugin
def pytest_terminal_summary(terminalreporter):
tr = terminalreporter
if not tr.reportchars:
#for name in "xfailed skipped failed xpassed":
# for name in "xfailed skipped failed xpassed":
# if not tr.stats.get(name, 0):
# tr.write_line("HINT: use '-r' option to see extra "
# "summary info about tests")
@ -289,18 +184,8 @@ def pytest_terminal_summary(terminalreporter):
lines = []
for char in tr.reportchars:
if char == "x":
show_xfailed(terminalreporter, lines)
elif char == "X":
show_xpassed(terminalreporter, lines)
elif char in "fF":
show_simple(terminalreporter, lines, 'failed', "FAIL %s")
elif char in "sS":
show_skipped(terminalreporter, lines)
elif char == "E":
show_simple(terminalreporter, lines, 'error', "ERROR %s")
elif char == 'p':
show_simple(terminalreporter, lines, 'passed', "PASSED %s")
action = REPORTCHAR_ACTIONS.get(char, lambda tr, lines: None)
action(terminalreporter, lines)
if lines:
tr._tw.sep("=", "short test summary info")
@ -336,45 +221,65 @@ def show_xpassed(terminalreporter, lines):
lines.append("XPASS %s %s" % (pos, reason))
def cached_eval(config, expr, d):
if not hasattr(config, '_evalcache'):
config._evalcache = {}
try:
return config._evalcache[expr]
except KeyError:
import _pytest._code
exprcode = _pytest._code.compile(expr, mode="eval")
config._evalcache[expr] = x = eval(exprcode, d)
return x
def folded_skips(skipped):
d = {}
for event in skipped:
key = event.longrepr
assert len(key) == 3, (event, key)
keywords = getattr(event, 'keywords', {})
# folding reports with global pytestmark variable
# this is a workaround, because for now we cannot identify the scope of a skip marker
# TODO: revisit once the scope of marks is fixed
when = getattr(event, 'when', None)
if when == 'setup' and 'skip' in keywords and 'pytestmark' not in keywords:
key = (key[0], None, key[2])
d.setdefault(key, []).append(event)
l = []
values = []
for key, events in d.items():
l.append((len(events),) + key)
return l
values.append((len(events),) + key)
return values
def show_skipped(terminalreporter, lines):
tr = terminalreporter
skipped = tr.stats.get('skipped', [])
if skipped:
#if not tr.hasopt('skipped'):
# if not tr.hasopt('skipped'):
# tr.write_line(
# "%d skipped tests, specify -rs for more info" %
# len(skipped))
# return
fskips = folded_skips(skipped)
if fskips:
#tr.write_sep("_", "skipped test summary")
# tr.write_sep("_", "skipped test summary")
for num, fspath, lineno, reason in fskips:
if reason.startswith("Skipped: "):
reason = reason[9:]
lines.append(
"SKIP [%d] %s:%d: %s" %
(num, fspath, lineno, reason))
if lineno is not None:
lines.append(
"SKIP [%d] %s:%d: %s" %
(num, fspath, lineno + 1, reason))
else:
lines.append(
"SKIP [%d] %s: %s" %
(num, fspath, reason))
def shower(stat, format):
def show_(terminalreporter, lines):
return show_simple(terminalreporter, lines, stat, format)
return show_
REPORTCHAR_ACTIONS = {
'x': show_xfailed,
'X': show_xpassed,
'f': shower('failed', "FAIL %s"),
'F': shower('failed', "FAIL %s"),
's': show_skipped,
'S': show_skipped,
'p': shower('passed', "PASSED %s"),
'E': shower('error', "ERROR %s")
}
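# Sketch of how the table above is consumed (this mirrors the loop in
# pytest_terminal_summary earlier in this file; the report characters are
# example values, e.g. as produced by "-r xfs"):
#
#     for char in "xfs":
#         action = REPORTCHAR_ACTIONS.get(char, lambda tr, lines: None)
#         action(terminalreporter, lines)   # appends XFAIL/FAIL/SKIP summary lines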

View File

@ -5,50 +5,96 @@ This is a good source for looking at the various reporting hooks.
from __future__ import absolute_import, division, print_function
import itertools
from _pytest.main import EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, \
EXIT_USAGEERROR, EXIT_NOTESTSCOLLECTED
import pytest
import py
import platform
import sys
import time
import platform
import _pytest._pluggy as pluggy
import pluggy
import py
import six
from more_itertools import collapse
import pytest
from _pytest import nodes
from _pytest.main import EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, \
EXIT_USAGEERROR, EXIT_NOTESTSCOLLECTED
import argparse
class MoreQuietAction(argparse.Action):
"""
a modified copy of the argparse count action which counts down and updates
the legacy quiet attribute at the same time.
Used to unify verbosity handling.
"""
def __init__(self,
option_strings,
dest,
default=None,
required=False,
help=None):
super(MoreQuietAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=0,
default=default,
required=required,
help=help)
def __call__(self, parser, namespace, values, option_string=None):
new_count = getattr(namespace, self.dest, 0) - 1
setattr(namespace, self.dest, new_count)
# todo Deprecate config.quiet
namespace.quiet = getattr(namespace, 'quiet', 0) + 1
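# Worked example of the action above (assuming the option is registered with
# dest="verbose" and default 0, as pytest_addoption below does):
#
#     import argparse
#     parser = argparse.ArgumentParser()
#     parser.add_argument('-q', '--quiet', action=MoreQuietAction,
#                         dest='verbose', default=0)
#     ns = parser.parse_args(['-qq'])
#     assert ns.verbose == -2 and ns.quiet == 2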
def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "reporting", after="general")
group._addoption('-v', '--verbose', action="count",
dest="verbose", default=0, help="increase verbosity."),
group._addoption('-q', '--quiet', action="count",
dest="quiet", default=0, help="decrease verbosity."),
group._addoption('-v', '--verbose', action="count", default=0,
dest="verbose", help="increase verbosity."),
group._addoption('-q', '--quiet', action=MoreQuietAction, default=0,
dest="verbose", help="decrease verbosity."),
group._addoption("--verbosity", dest='verbose', type=int, default=0,
help="set verbosity")
group._addoption('-r',
action="store", dest="reportchars", default='', metavar="chars",
help="show extra test summary info as specified by chars (f)ailed, "
"(E)error, (s)skipped, (x)failed, (X)passed, "
"(p)passed, (P)passed with output, (a)all except pP. "
"Warnings are displayed at all times except when "
"--disable-warnings is set")
action="store", dest="reportchars", default='', metavar="chars",
help="show extra test summary info as specified by chars (f)ailed, "
"(E)error, (s)skipped, (x)failed, (X)passed, "
"(p)passed, (P)passed with output, (a)all except pP. "
"Warnings are displayed at all times except when "
"--disable-warnings is set")
group._addoption('--disable-warnings', '--disable-pytest-warnings', default=False,
dest='disable_warnings', action='store_true',
help='disable warnings summary')
group._addoption('-l', '--showlocals',
action="store_true", dest="showlocals", default=False,
help="show locals in tracebacks (disabled by default).")
action="store_true", dest="showlocals", default=False,
help="show locals in tracebacks (disabled by default).")
group._addoption('--tb', metavar="style",
action="store", dest="tbstyle", default='auto',
choices=['auto', 'long', 'short', 'no', 'line', 'native'],
help="traceback print mode (auto/long/short/line/native/no).")
action="store", dest="tbstyle", default='auto',
choices=['auto', 'long', 'short', 'no', 'line', 'native'],
help="traceback print mode (auto/long/short/line/native/no).")
group._addoption('--show-capture',
action="store", dest="showcapture",
choices=['no', 'stdout', 'stderr', 'log', 'all'], default='all',
help="Controls how captured stdout/stderr/log is shown on failed tests. "
"Default is 'all'.")
group._addoption('--fulltrace', '--full-trace',
action="store_true", default=False,
help="don't cut any tracebacks (default is to cut).")
action="store_true", default=False,
help="don't cut any tracebacks (default is to cut).")
group._addoption('--color', metavar="color",
action="store", dest="color", default='auto',
choices=['yes', 'no', 'auto'],
help="color terminal output (yes/no/auto).")
action="store", dest="color", default='auto',
choices=['yes', 'no', 'auto'],
help="color terminal output (yes/no/auto).")
parser.addini("console_output_style",
help="console output: classic or with additional progress information (classic|progress).",
default='progress')
def pytest_configure(config):
config.option.verbose -= config.option.quiet
reporter = TerminalReporter(config, sys.stdout)
config.pluginmanager.register(reporter, 'terminalreporter')
if config.option.debug or config.option.traceconfig:
@ -57,6 +103,7 @@ def pytest_configure(config):
reporter.write_line("[traceconfig] " + msg)
config.trace.root.setprocessor("pytest:config", mywriter)
def getreportopt(config):
reportopts = ""
reportchars = config.option.reportchars
@ -72,6 +119,7 @@ def getreportopt(config):
reportopts = 'fEsxXw'
return reportopts
def pytest_report_teststatus(report):
if report.passed:
letter = "."
@ -84,10 +132,11 @@ def pytest_report_teststatus(report):
return report.outcome, letter, report.outcome.upper()
class WarningReport:
class WarningReport(object):
"""
Simple structure to hold warnings information captured by ``pytest_logwarning``.
"""
def __init__(self, code, message, nodeid=None, fslocation=None):
"""
:param code: unused
@ -118,7 +167,7 @@ class WarningReport:
return None
class TerminalReporter:
class TerminalReporter(object):
def __init__(self, config, file=None):
import _pytest.config
self.config = config
@ -127,17 +176,32 @@ class TerminalReporter:
self.showfspath = self.verbosity >= 0
self.showlongtestinfo = self.verbosity > 0
self._numcollected = 0
self._session = None
self.stats = {}
self.startdir = py.path.local()
if file is None:
file = sys.stdout
self._tw = self.writer = _pytest.config.create_terminal_writer(config,
file)
self._tw = _pytest.config.create_terminal_writer(config, file)
# self.writer will be deprecated in pytest-3.4
self.writer = self._tw
self._screen_width = self._tw.fullwidth
self.currentfspath = None
self.reportchars = getreportopt(config)
self.hasmarkup = self._tw.hasmarkup
self.isatty = file.isatty()
self._progress_nodeids_reported = set()
self._show_progress_info = self._determine_show_progress_info()
def _determine_show_progress_info(self):
"""Return True if we should display progress information based on the current config"""
# do not show progress if we are not capturing output (#3038)
if self.config.getoption('capture') == 'no':
return False
# do not show progress if we are showing fixture setup/teardown
if self.config.getoption('setupshow'):
return False
return self.config.getini('console_output_style') == 'progress'
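# Configuration sketch matching the checks above (values only; the option and
# ini names are the ones referenced in this method):
#
#     # pytest.ini
#     [pytest]
#     console_output_style = progress   # 'classic' disables the percent column
#
# Progress output is also suppressed when capturing is disabled
# ("-s" / "--capture=no") or when "--setup-show" is in effect.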
def hasopt(self, char):
char = {'xfailed': 'x', 'skipped': 's'}.get(char, char)
@ -146,6 +210,8 @@ class TerminalReporter:
def write_fspath_result(self, nodeid, res):
fspath = self.config.rootdir.join(nodeid.split("::")[0])
if fspath != self.currentfspath:
if self.currentfspath is not None:
self._write_progress_information_filling_space()
self.currentfspath = fspath
fspath = self.startdir.bestrelpath(fspath)
self._tw.line()
@ -170,14 +236,28 @@ class TerminalReporter:
self._tw.write(content, **markup)
def write_line(self, line, **markup):
if not py.builtin._istext(line):
line = py.builtin.text(line, errors="replace")
if not isinstance(line, six.text_type):
line = six.text_type(line, errors="replace")
self.ensure_newline()
self._tw.line(line, **markup)
def rewrite(self, line, **markup):
"""
Rewinds the terminal cursor to the beginning and writes the given line.
:kwarg erase: if True, will also add spaces until the full terminal width to ensure
previous lines are properly erased.
The rest of the keyword arguments are markup instructions.
"""
erase = markup.pop('erase', False)
if erase:
fill_count = self._tw.fullwidth - len(line) - 1
fill = ' ' * fill_count
else:
fill = ''
line = str(line)
self._tw.write("\r" + line, **markup)
self._tw.write("\r" + line + fill, **markup)
def write_sep(self, sep, title=None, **markup):
self.ensure_newline()
@ -190,7 +270,7 @@ class TerminalReporter:
self._tw.line(msg, **kw)
def pytest_internalerror(self, excrepr):
for line in py.builtin.text(excrepr).split("\n"):
for line in six.text_type(excrepr).split("\n"):
self.write_line("INTERNALERROR> " + line)
return 1
@ -225,38 +305,76 @@ class TerminalReporter:
rep = report
res = self.config.hook.pytest_report_teststatus(report=rep)
cat, letter, word = res
if isinstance(word, tuple):
word, markup = word
else:
markup = None
self.stats.setdefault(cat, []).append(rep)
self._tests_ran = True
if not letter and not word:
# probably passed setup/teardown
return
running_xdist = hasattr(rep, 'node')
if self.verbosity <= 0:
if not hasattr(rep, 'node') and self.showfspath:
if not running_xdist and self.showfspath:
self.write_fspath_result(rep.nodeid, letter)
else:
self._tw.write(letter)
else:
if isinstance(word, tuple):
word, markup = word
else:
self._progress_nodeids_reported.add(rep.nodeid)
if markup is None:
if rep.passed:
markup = {'green':True}
markup = {'green': True}
elif rep.failed:
markup = {'red':True}
markup = {'red': True}
elif rep.skipped:
markup = {'yellow':True}
markup = {'yellow': True}
else:
markup = {}
line = self._locationline(rep.nodeid, *rep.location)
if not hasattr(rep, 'node'):
if not running_xdist:
self.write_ensure_prefix(line, word, **markup)
#self._tw.write(word, **markup)
if self._show_progress_info:
self._write_progress_information_filling_space()
else:
self.ensure_newline()
if hasattr(rep, 'node'):
self._tw.write("[%s] " % rep.node.gateway.id)
self._tw.write("[%s]" % rep.node.gateway.id)
if self._show_progress_info:
self._tw.write(self._get_progress_information_message() + " ", cyan=True)
else:
self._tw.write(' ')
self._tw.write(word, **markup)
self._tw.write(" " + line)
self.currentfspath = -2
def pytest_runtest_logfinish(self, nodeid):
if self.verbosity <= 0 and self._show_progress_info:
self._progress_nodeids_reported.add(nodeid)
last_item = len(self._progress_nodeids_reported) == self._session.testscollected
if last_item:
self._write_progress_information_filling_space()
else:
past_edge = self._tw.chars_on_current_line + self._PROGRESS_LENGTH + 1 >= self._screen_width
if past_edge:
msg = self._get_progress_information_message()
self._tw.write(msg + '\n', cyan=True)
_PROGRESS_LENGTH = len(' [100%]')
def _get_progress_information_message(self):
if self.config.getoption('capture') == 'no':
return ''
collected = self._session.testscollected
if collected:
progress = len(self._progress_nodeids_reported) * 100 // collected
return ' [{:3d}%]'.format(progress)
return ' [100%]'
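# Worked example of the computation above (numbers are illustrative): with
# 120 collected tests and 30 node ids reported so far, progress is
# 30 * 100 // 120 == 25, so the returned string is ' [ 25%]' (the '{:3d}'
# format pads to three digits); once every test is reported it is ' [100%]'.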
def _write_progress_information_filling_space(self):
msg = self._get_progress_information_message()
fill = ' ' * (self._tw.fullwidth - self._tw.chars_on_current_line - len(msg) - 1)
self.write(fill + msg, cyan=True)
def pytest_collection(self):
if not self.isatty and self.config.option.verbose >= 1:
self.write("collecting ... ", bold=True)
@ -269,7 +387,7 @@ class TerminalReporter:
items = [x for x in report.result if isinstance(x, pytest.Item)]
self._numcollected += len(items)
if self.isatty:
#self.write_fspath_result(report.nodeid, 'E')
# self.write_fspath_result(report.nodeid, 'E')
self.report_collect()
def report_collect(self, final=False):
@ -278,6 +396,7 @@ class TerminalReporter:
errors = len(self.stats.get('error', []))
skipped = len(self.stats.get('skipped', []))
deselected = len(self.stats.get('deselected', []))
if final:
line = "collected "
else:
@ -285,20 +404,24 @@ class TerminalReporter:
line += str(self._numcollected) + " item" + ('' if self._numcollected == 1 else 's')
if errors:
line += " / %d errors" % errors
if deselected:
line += " / %d deselected" % deselected
if skipped:
line += " / %d skipped" % skipped
if self.isatty:
self.rewrite(line, bold=True, erase=True)
if final:
line += " \n"
self.rewrite(line, bold=True)
self.write('\n')
else:
self.write_line(line)
@pytest.hookimpl(trylast=True)
def pytest_collection_modifyitems(self):
self.report_collect(True)
@pytest.hookimpl(trylast=True)
def pytest_sessionstart(self, session):
self._session = session
self._sessionstarttime = time.time()
if not self.showheader:
return
@ -316,8 +439,11 @@ class TerminalReporter:
self.write_line(msg)
lines = self.config.hook.pytest_report_header(
config=self.config, startdir=self.startdir)
self._write_report_lines_from_hooks(lines)
def _write_report_lines_from_hooks(self, lines):
lines.reverse()
for line in flatten(lines):
for line in collapse(lines):
self.write_line(line)
def pytest_report_header(self, config):
@ -342,10 +468,9 @@ class TerminalReporter:
rep.toterminal(self._tw)
return 1
return 0
if not self.showheader:
return
#for i, testarg in enumerate(self.config.args):
# self.write_line("test path %d: %s" %(i+1, testarg))
lines = self.config.hook.pytest_report_collectionfinish(
config=self.config, startdir=self.startdir, items=session.items)
self._write_report_lines_from_hooks(lines)
def _printcollecteditems(self, items):
# to print out items and their parent collectors
@ -368,14 +493,14 @@ class TerminalReporter:
stack = []
indent = ""
for item in items:
needed_collectors = item.listchain()[1:] # strip root node
needed_collectors = item.listchain()[1:] # strip root node
while stack:
if stack == needed_collectors[:len(stack)]:
break
stack.pop()
for col in needed_collectors[len(stack):]:
stack.append(col)
#if col.name == "()":
# if col.name == "()":
# continue
indent = (len(stack) - 1) * " "
self._tw.line("%s%s" % (indent, col))
@ -391,16 +516,19 @@ class TerminalReporter:
if exitstatus in summary_exit_codes:
self.config.hook.pytest_terminal_summary(terminalreporter=self,
exitstatus=exitstatus)
self.summary_errors()
self.summary_failures()
self.summary_warnings()
self.summary_passes()
if exitstatus == EXIT_INTERRUPTED:
self._report_keyboardinterrupt()
del self._keyboardinterrupt_memo
self.summary_deselected()
self.summary_stats()
@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(self):
self.summary_errors()
self.summary_failures()
yield
self.summary_warnings()
self.summary_passes()
def pytest_keyboard_interrupt(self, excinfo):
self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True)
@ -424,15 +552,15 @@ class TerminalReporter:
line = self.config.cwd_relative_nodeid(nodeid)
if domain and line.endswith(domain):
line = line[:-len(domain)]
l = domain.split("[")
l[0] = l[0].replace('.', '::') # don't replace '.' in params
line += "[".join(l)
values = domain.split("[")
values[0] = values[0].replace('.', '::') # don't replace '.' in params
line += "[".join(values)
return line
# collect_fspath comes from testid which has a "/"-normalized path
if fspath:
res = mkrel(nodeid).replace("::()", "") # parens-normalization
if nodeid.split("::")[0] != fspath.replace("\\", "/"):
if nodeid.split("::")[0] != fspath.replace("\\", nodes.SEP):
res += " <- " + self.startdir.bestrelpath(fspath)
else:
res = "[location]"
@ -443,7 +571,7 @@ class TerminalReporter:
fspath, lineno, domain = rep.location
return domain
else:
return "test session" # XXX?
return "test session" # XXX?
def _getcrashline(self, rep):
try:
@ -458,11 +586,11 @@ class TerminalReporter:
# summaries for sessionfinish
#
def getreports(self, name):
l = []
values = []
for x in self.stats.get(name, []):
if not hasattr(x, '_pdbshown'):
l.append(x)
return l
values.append(x)
return values
def summary_warnings(self):
if self.hasopt("w"):
@ -473,9 +601,9 @@ class TerminalReporter:
grouped = itertools.groupby(all_warnings, key=lambda wr: wr.get_location(self.config))
self.write_sep("=", "warnings summary", yellow=True, bold=False)
for location, warnings in grouped:
for location, warning_records in grouped:
self._tw.line(str(location) or '<undetermined location>')
for w in warnings:
for w in warning_records:
lines = w.message.splitlines()
indented = '\n'.join(' ' + x for x in lines)
self._tw.line(indented)
@ -502,7 +630,6 @@ class TerminalReporter:
content = content[:-1]
self._tw.line(content)
def summary_failures(self):
if self.config.option.tbstyle != "no":
reports = self.getreports('failed')
@ -542,7 +669,12 @@ class TerminalReporter:
def _outrep_summary(self, rep):
rep.toterminal(self._tw)
showcapture = self.config.option.showcapture
if showcapture == 'no':
return
for secname, content in rep.sections:
if showcapture != 'all' and showcapture not in secname:
continue
self._tw.sep("-", secname)
if content[-1:] == "\n":
content = content[:-1]
@ -559,10 +691,6 @@ class TerminalReporter:
if self.verbosity == -1:
self.write_line(msg, **markup)
def summary_deselected(self):
if 'deselected' in self.stats:
self.write_sep("=", "%d tests deselected" % (
len(self.stats['deselected'])), bold=True)
def repr_pythonversion(v=None):
if v is None:
@ -572,13 +700,6 @@ def repr_pythonversion(v=None):
except (TypeError, ValueError):
return str(v)
def flatten(l):
for x in l:
if isinstance(x, (list, tuple)):
for y in flatten(x):
yield y
else:
yield x
def build_summary_stats_line(stats):
keys = ("failed passed skipped deselected "
@ -586,7 +707,7 @@ def build_summary_stats_line(stats):
unknown_key_seen = False
for key in stats.keys():
if key not in keys:
if key: # setup/teardown reports have an empty key, ignore them
if key: # setup/teardown reports have an empty key, ignore them
keys.append(key)
unknown_key_seen = True
parts = []
@ -613,7 +734,7 @@ def build_summary_stats_line(stats):
def _plugin_nameversions(plugininfo):
l = []
values = []
for plugin, dist in plugininfo:
# gets us name and version!
name = '{dist.project_name}-{dist.version}'.format(dist=dist)
@ -622,6 +743,6 @@ def _plugin_nameversions(plugininfo):
name = name[7:]
# we decided to print python package names
# they can have more than one plugin
if name not in l:
l.append(name)
return l
if name not in values:
values.append(name)
return values

View File

@ -8,7 +8,7 @@ import py
from _pytest.monkeypatch import MonkeyPatch
class TempdirFactory:
class TempdirFactory(object):
"""Factory for temporary directories under the common base temp directory.
The base directory can be configured using the ``--basetemp`` option.
@ -25,7 +25,7 @@ class TempdirFactory:
provides an empty unique-per-test-invocation directory
and is guaranteed to be empty.
"""
#py.log._apiwarn(">1.1", "use tmpdir function argument")
# py.log._apiwarn(">1.1", "use tmpdir function argument")
return self.getbasetemp().ensure(string, dir=dir)
def mktemp(self, basename, numbered=True):
@ -38,7 +38,7 @@ class TempdirFactory:
p = basetemp.mkdir(basename)
else:
p = py.path.local.make_numbered_dir(prefix=basename,
keep=0, rootdir=basetemp, lock_timeout=None)
keep=0, rootdir=basetemp, lock_timeout=None)
self.trace("mktemp", p)
return p
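# Illustrative fixture built on top of the factory above (the fixture name
# and the directory basename are placeholders, not part of this module):
#
#     import pytest
#
#     @pytest.fixture(scope="session")
#     def image_dir(tmpdir_factory):
#         # numbered, unique-per-session directory under the base temp dir
#         return tmpdir_factory.mktemp("images", numbered=True)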
@ -116,6 +116,8 @@ def tmpdir(request, tmpdir_factory):
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
.. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html
"""
name = request.node.name
name = re.sub(r"[\W]", "_", name)

View File

@ -7,9 +7,8 @@ import traceback
# for transferring markers
import _pytest._code
from _pytest.config import hookimpl
from _pytest.runner import fail, skip
from _pytest.outcomes import fail, skip, xfail
from _pytest.python import transfer_markers, Class, Module, Function
from _pytest.skipping import MarkEvaluator, xfail
def pytest_pycollect_makeitem(collector, name, obj):
@ -109,13 +108,13 @@ class TestCaseFunction(Function):
except TypeError:
try:
try:
l = traceback.format_exception(*rawexcinfo)
l.insert(0, "NOTE: Incompatible Exception Representation, "
"displaying natively:\n\n")
fail("".join(l), pytrace=False)
values = traceback.format_exception(*rawexcinfo)
values.insert(0, "NOTE: Incompatible Exception Representation, "
"displaying natively:\n\n")
fail("".join(values), pytrace=False)
except (fail.Exception, KeyboardInterrupt):
raise
except:
except: # noqa
fail("ERROR: Unknown Incompatible Exception "
"representation:\n%r" % (rawexcinfo,), pytrace=False)
except KeyboardInterrupt:
@ -134,8 +133,7 @@ class TestCaseFunction(Function):
try:
skip(reason)
except skip.Exception:
self._evalskip = MarkEvaluator(self, 'SkipTest')
self._evalskip.result = True
self._skipped_by_mark = True
self._addexcinfo(sys.exc_info())
def addExpectedFailure(self, testcase, rawexcinfo, reason=""):
@ -158,7 +156,7 @@ class TestCaseFunction(Function):
# analog to pythons Lib/unittest/case.py:run
testMethod = getattr(self._testcase, self._testcase._testMethodName)
if (getattr(self._testcase.__class__, "__unittest_skip__", False) or
getattr(testMethod, "__unittest_skip__", False)):
getattr(testMethod, "__unittest_skip__", False)):
# If the class or method was skipped.
skip_why = (getattr(self._testcase.__class__, '__unittest_skip_why__', '') or
getattr(testMethod, '__unittest_skip_why__', ''))
@ -210,7 +208,7 @@ def pytest_runtest_protocol(item):
check_testcase_implements_trial_reporter()
def excstore(self, exc_value=None, exc_type=None, exc_tb=None,
captureVars=None):
captureVars=None):
if exc_value is None:
self._rawexcinfo = sys.exc_info()
else:
@ -219,7 +217,7 @@ def pytest_runtest_protocol(item):
self._rawexcinfo = (exc_type, exc_value, exc_tb)
try:
Failure__init__(self, exc_value, exc_type, exc_tb,
captureVars=captureVars)
captureVars=captureVars)
except TypeError:
Failure__init__(self, exc_value, exc_type, exc_tb)

View File

@ -1,13 +0,0 @@
This directory vendors the `pluggy` module.
For a more detailed discussion of the reasons for vendoring this
package, please see [this issue](https://github.com/pytest-dev/pytest/issues/944).
To update the current version, execute:
```
$ pip install -U pluggy==<version> --no-compile --target=_pytest/vendored_packages
```
And commit the modified files. The `pluggy-<version>.dist-info` directory
created by `pip` should be added as well.

View File

@ -1,11 +0,0 @@
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.

View File

@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 holger krekel (rather uses bitbucket/hpk42)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -1,40 +0,0 @@
Metadata-Version: 2.0
Name: pluggy
Version: 0.4.0
Summary: plugin and hook calling mechanisms for python
Home-page: https://github.com/pytest-dev/pluggy
Author: Holger Krekel
Author-email: holger at merlinux.eu
License: MIT license
Platform: unix
Platform: linux
Platform: osx
Platform: win32
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.

View File

@ -1,9 +0,0 @@
pluggy.py,sha256=u0oG9cv-oLOkNvEBlwnnu8pp1AyxpoERgUO00S3rvpQ,31543
pluggy-0.4.0.dist-info/DESCRIPTION.rst,sha256=ltvjkFd40LW_xShthp6RRVM6OB_uACYDFR3kTpKw7o4,307
pluggy-0.4.0.dist-info/LICENSE.txt,sha256=ruwhUOyV1HgE9F35JVL9BCZ9vMSALx369I4xq9rhpkM,1134
pluggy-0.4.0.dist-info/METADATA,sha256=pe2hbsqKFaLHC6wAQPpFPn0KlpcPfLBe_BnS4O70bfk,1364
pluggy-0.4.0.dist-info/RECORD,,
pluggy-0.4.0.dist-info/WHEEL,sha256=9Z5Xm-eel1bTS7e6ogYiKz0zmPEqDwIypurdHN1hR40,116
pluggy-0.4.0.dist-info/metadata.json,sha256=T3go5L2qOa_-H-HpCZi3EoVKb8sZ3R-fOssbkWo2nvM,1119
pluggy-0.4.0.dist-info/top_level.txt,sha256=xKSCRhai-v9MckvMuWqNz16c1tbsmOggoMSwTgcpYHE,7
pluggy-0.4.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4

View File

@ -1,6 +0,0 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.29.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

View File

@ -1 +0,0 @@
{"classifiers": ["Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows", "Operating System :: MacOS :: MacOS X", "Topic :: Software Development :: Testing", "Topic :: Software Development :: Libraries", "Topic :: Utilities", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5"], "extensions": {"python.details": {"contacts": [{"email": "holger at merlinux.eu", "name": "Holger Krekel", "role": "author"}], "document_names": {"description": "DESCRIPTION.rst", "license": "LICENSE.txt"}, "project_urls": {"Home": "https://github.com/pytest-dev/pluggy"}}}, "generator": "bdist_wheel (0.29.0)", "license": "MIT license", "metadata_version": "2.0", "name": "pluggy", "platform": "unix", "summary": "plugin and hook calling mechanisms for python", "version": "0.4.0"}

View File

@ -1,802 +0,0 @@
"""
PluginManager, basic initialization and tracing.
pluggy is the crystallized core of plugin management as used
by some 150 plugins for pytest.
Pluggy uses semantic versioning. Breaking changes are only foreseen for
Major releases (incremented X in "X.Y.Z"). If you want to use pluggy in
your project you should thus use a dependency restriction like
"pluggy>=0.1.0,<1.0" to avoid surprises.
pluggy is concerned with hook specification, hook implementations and hook
calling. For any given hook specification a hook call invokes up to N implementations.
A hook implementation can influence its position and type of execution:
if attributed "tryfirst" or "trylast" it will be tried to execute
first or last. However, if attributed "hookwrapper" an implementation
can wrap all calls to non-hookwrapper implementations. A hookwrapper
can thus execute some code ahead and after the execution of other hooks.
Hook specification is done by way of a regular python function where
both the function name and the names of all its arguments are significant.
Each hook implementation function is verified against the original specification
function, including the names of all its arguments. To allow for hook specifications
to evolve over the lifetime of a project, hook implementations can
accept fewer arguments. One can thus add new arguments and semantics to
a hook specification by adding another argument typically without breaking
existing hook implementations.
The chosen approach is meant to let a hook designer think carefully about
which objects are needed by an extension writer. By contrast, subclass-based
extension mechanisms often expose a lot more state and behaviour than needed,
thus restricting future developments.
Pluggy currently consists of functionality for:
- a way to register new hook specifications. Without a hook
specification no hook calling can be performed.
- a registry of plugins which contain hook implementation functions. It
is possible to register plugins for which a hook specification is not yet
known and validate all hooks when the system is in a more referentially
consistent state. Setting an "optionalhook" attribution to a hook
implementation will avoid PluginValidationError's if a specification
is missing. This allows to have optional integration between plugins.
- a "hook" relay object from which you can launch 1:N calls to
registered hook implementation functions
- a mechanism for ordering hook implementation functions
- mechanisms for two different types of 1:N calls: "firstresult" for when
the call should stop when the first implementation returns a non-None result.
And the other (default) way of guaranteeing that all hook implementations
will be called and their non-None result collected.
- mechanisms for "historic" extension points such that all newly
registered functions will receive all hook calls that happened
before their registration.
- a mechanism for discovering plugin objects which are based on
setuptools based entry points.
- a simple tracing mechanism, including tracing of plugin calls and
their arguments.
"""
import sys
import inspect
__version__ = '0.4.0'
__all__ = ["PluginManager", "PluginValidationError", "HookCallError",
"HookspecMarker", "HookimplMarker"]
_py3 = sys.version_info > (3, 0)
class HookspecMarker:
""" Decorator helper class for marking functions as hook specifications.
You can instantiate it with a project_name to get a decorator.
Calling PluginManager.add_hookspecs later will discover all marked functions
if the PluginManager uses the same project_name.
"""
def __init__(self, project_name):
self.project_name = project_name
def __call__(self, function=None, firstresult=False, historic=False):
""" if passed a function, directly sets attributes on the function
which will make it discoverable to add_hookspecs(). If passed no
function, returns a decorator which can be applied to a function
later using the attributes supplied.
If firstresult is True the 1:N hook call (N being the number of registered
hook implementation functions) will stop at I<=N when the I'th function
returns a non-None result.
If historic is True calls to a hook will be memorized and replayed
on later registered plugins.
"""
def setattr_hookspec_opts(func):
if historic and firstresult:
raise ValueError("cannot have a historic firstresult hook")
setattr(func, self.project_name + "_spec",
dict(firstresult=firstresult, historic=historic))
return func
if function is not None:
return setattr_hookspec_opts(function)
else:
return setattr_hookspec_opts
class HookimplMarker:
""" Decorator helper class for marking functions as hook implementations.
You can instantiate with a project_name to get a decorator.
Calling PluginManager.register later will discover all marked functions
if the PluginManager uses the same project_name.
"""
def __init__(self, project_name):
self.project_name = project_name
def __call__(self, function=None, hookwrapper=False, optionalhook=False,
tryfirst=False, trylast=False):
""" if passed a function, directly sets attributes on the function
which will make it discoverable to register(). If passed no function,
returns a decorator which can be applied to a function later using
the attributes supplied.
If optionalhook is True a missing matching hook specification will not result
in an error (by default it is an error if no matching spec is found).
If tryfirst is True this hook implementation will run as early as possible
in the chain of N hook implementations for a specification.
If trylast is True this hook implementation will run as late as possible
in the chain of N hook implementations.
If hookwrapper is True the hook implementation needs to execute exactly
one "yield". The code before the yield is run early before any non-hookwrapper
function is run. The code after the yield is run after all non-hookwrapper
functions have run. The yield receives an ``_CallOutcome`` object representing
the exception or result outcome of the inner calls (including other hookwrapper
calls).
"""
def setattr_hookimpl_opts(func):
setattr(func, self.project_name + "_impl",
dict(hookwrapper=hookwrapper, optionalhook=optionalhook,
tryfirst=tryfirst, trylast=trylast))
return func
if function is None:
return setattr_hookimpl_opts
else:
return setattr_hookimpl_opts(function)
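# Minimal sketch of the two marker classes above in use (the project name,
# hook name and plugin are placeholder values):
#
#     hookspec = HookspecMarker("myproject")
#     hookimpl = HookimplMarker("myproject")
#
#     class MySpec(object):
#         @hookspec
#         def myhook(self, arg):
#             pass
#
#     class MyPlugin(object):
#         @hookimpl(tryfirst=True)
#         def myhook(self, arg):
#             return arg + 1
#
#     pm = PluginManager("myproject")
#     pm.add_hookspecs(MySpec)
#     pm.register(MyPlugin())
#     assert pm.hook.myhook(arg=1) == [2]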
def normalize_hookimpl_opts(opts):
opts.setdefault("tryfirst", False)
opts.setdefault("trylast", False)
opts.setdefault("hookwrapper", False)
opts.setdefault("optionalhook", False)
class _TagTracer:
def __init__(self):
self._tag2proc = {}
self.writer = None
self.indent = 0
def get(self, name):
return _TagTracerSub(self, (name,))
def format_message(self, tags, args):
if isinstance(args[-1], dict):
extra = args[-1]
args = args[:-1]
else:
extra = {}
content = " ".join(map(str, args))
indent = " " * self.indent
lines = [
"%s%s [%s]\n" % (indent, content, ":".join(tags))
]
for name, value in extra.items():
lines.append("%s %s: %s\n" % (indent, name, value))
return lines
def processmessage(self, tags, args):
if self.writer is not None and args:
lines = self.format_message(tags, args)
self.writer(''.join(lines))
try:
self._tag2proc[tags](tags, args)
except KeyError:
pass
def setwriter(self, writer):
self.writer = writer
def setprocessor(self, tags, processor):
if isinstance(tags, str):
tags = tuple(tags.split(":"))
else:
assert isinstance(tags, tuple)
self._tag2proc[tags] = processor
class _TagTracerSub:
def __init__(self, root, tags):
self.root = root
self.tags = tags
def __call__(self, *args):
self.root.processmessage(self.tags, args)
def setmyprocessor(self, processor):
self.root.setprocessor(self.tags, processor)
def get(self, name):
return self.__class__(self.root, self.tags + (name,))
def _raise_wrapfail(wrap_controller, msg):
co = wrap_controller.gi_code
raise RuntimeError("wrap_controller at %r %s:%d %s" %
(co.co_name, co.co_filename, co.co_firstlineno, msg))
def _wrapped_call(wrap_controller, func):
""" Wrap calling to a function with a generator which needs to yield
exactly once. The yield point will trigger calling the wrapped function
and return its _CallOutcome to the yield point. The generator then needs
to finish (raise StopIteration) in order for the wrapped call to complete.
"""
try:
next(wrap_controller) # first yield
except StopIteration:
_raise_wrapfail(wrap_controller, "did not yield")
call_outcome = _CallOutcome(func)
try:
wrap_controller.send(call_outcome)
_raise_wrapfail(wrap_controller, "has second yield")
except StopIteration:
pass
return call_outcome.get_result()
class _CallOutcome:
""" Outcome of a function call, either an exception or a proper result.
Calling the ``get_result`` method will return the result or reraise
the exception raised when the function was called. """
excinfo = None
def __init__(self, func):
try:
self.result = func()
except BaseException:
self.excinfo = sys.exc_info()
def force_result(self, result):
self.result = result
self.excinfo = None
def get_result(self):
if self.excinfo is None:
return self.result
else:
ex = self.excinfo
if _py3:
raise ex[1].with_traceback(ex[2])
_reraise(*ex) # noqa
if not _py3:
exec("""
def _reraise(cls, val, tb):
raise cls, val, tb
""")
class _TracedHookExecution:
def __init__(self, pluginmanager, before, after):
self.pluginmanager = pluginmanager
self.before = before
self.after = after
self.oldcall = pluginmanager._inner_hookexec
assert not isinstance(self.oldcall, _TracedHookExecution)
self.pluginmanager._inner_hookexec = self
def __call__(self, hook, hook_impls, kwargs):
self.before(hook.name, hook_impls, kwargs)
outcome = _CallOutcome(lambda: self.oldcall(hook, hook_impls, kwargs))
self.after(outcome, hook.name, hook_impls, kwargs)
return outcome.get_result()
def undo(self):
self.pluginmanager._inner_hookexec = self.oldcall
class PluginManager(object):
""" Core Pluginmanager class which manages registration
of plugin objects and 1:N hook calling.
You can register new hooks by calling ``add_hookspecs(module_or_class)``.
You can register plugin objects (which contain hooks) by calling
``register(plugin)``. The Pluginmanager is initialized with a
prefix that is searched for in the names of the dict of registered
plugin objects. An optional excludefunc allows blacklisting names which
are not considered as hooks despite a matching prefix.
For debugging purposes you can call ``enable_tracing()``
which will subsequently send debug information to the trace helper.
"""
def __init__(self, project_name, implprefix=None):
""" if implprefix is given implementation functions
will be recognized if their name matches the implprefix. """
self.project_name = project_name
self._name2plugin = {}
self._plugin2hookcallers = {}
self._plugin_distinfo = []
self.trace = _TagTracer().get("pluginmanage")
self.hook = _HookRelay(self.trace.root.get("hook"))
self._implprefix = implprefix
self._inner_hookexec = lambda hook, methods, kwargs: \
_MultiCall(methods, kwargs, hook.spec_opts).execute()
def _hookexec(self, hook, methods, kwargs):
# called from all hookcaller instances.
# enable_tracing will set its own wrapping function at self._inner_hookexec
return self._inner_hookexec(hook, methods, kwargs)
def register(self, plugin, name=None):
""" Register a plugin and return its canonical name or None if the name
is blocked from registering. Raise a ValueError if the plugin is already
registered. """
plugin_name = name or self.get_canonical_name(plugin)
if plugin_name in self._name2plugin or plugin in self._plugin2hookcallers:
if self._name2plugin.get(plugin_name, -1) is None:
return # blocked plugin, return None to indicate no registration
raise ValueError("Plugin already registered: %s=%s\n%s" %
(plugin_name, plugin, self._name2plugin))
# XXX if an error happens we should make sure no state has been
# changed at point of return
self._name2plugin[plugin_name] = plugin
# register matching hook implementations of the plugin
self._plugin2hookcallers[plugin] = hookcallers = []
for name in dir(plugin):
hookimpl_opts = self.parse_hookimpl_opts(plugin, name)
if hookimpl_opts is not None:
normalize_hookimpl_opts(hookimpl_opts)
method = getattr(plugin, name)
hookimpl = HookImpl(plugin, plugin_name, method, hookimpl_opts)
hook = getattr(self.hook, name, None)
if hook is None:
hook = _HookCaller(name, self._hookexec)
setattr(self.hook, name, hook)
elif hook.has_spec():
self._verify_hook(hook, hookimpl)
hook._maybe_apply_history(hookimpl)
hook._add_hookimpl(hookimpl)
hookcallers.append(hook)
return plugin_name
def parse_hookimpl_opts(self, plugin, name):
method = getattr(plugin, name)
try:
res = getattr(method, self.project_name + "_impl", None)
except Exception:
res = {}
if res is not None and not isinstance(res, dict):
# false positive
res = None
elif res is None and self._implprefix and name.startswith(self._implprefix):
res = {}
return res
def unregister(self, plugin=None, name=None):
""" unregister a plugin object and all its contained hook implementations
from internal data structures. """
if name is None:
assert plugin is not None, "one of name or plugin needs to be specified"
name = self.get_name(plugin)
if plugin is None:
plugin = self.get_plugin(name)
# if self._name2plugin[name] == None registration was blocked: ignore
if self._name2plugin.get(name):
del self._name2plugin[name]
for hookcaller in self._plugin2hookcallers.pop(plugin, []):
hookcaller._remove_plugin(plugin)
return plugin
def set_blocked(self, name):
""" block registrations of the given name, unregister if already registered. """
self.unregister(name=name)
self._name2plugin[name] = None
def is_blocked(self, name):
""" return True if the name blogs registering plugins of that name. """
return name in self._name2plugin and self._name2plugin[name] is None
def add_hookspecs(self, module_or_class):
""" add new hook specifications defined in the given module_or_class.
Functions are recognized if they have been decorated accordingly. """
names = []
for name in dir(module_or_class):
spec_opts = self.parse_hookspec_opts(module_or_class, name)
if spec_opts is not None:
hc = getattr(self.hook, name, None)
if hc is None:
hc = _HookCaller(name, self._hookexec, module_or_class, spec_opts)
setattr(self.hook, name, hc)
else:
# plugins registered this hook without knowing the spec
hc.set_specification(module_or_class, spec_opts)
for hookfunction in (hc._wrappers + hc._nonwrappers):
self._verify_hook(hc, hookfunction)
names.append(name)
if not names:
raise ValueError("did not find any %r hooks in %r" %
(self.project_name, module_or_class))
def parse_hookspec_opts(self, module_or_class, name):
method = getattr(module_or_class, name)
return getattr(method, self.project_name + "_spec", None)
def get_plugins(self):
""" return the set of registered plugins. """
return set(self._plugin2hookcallers)
def is_registered(self, plugin):
""" Return True if the plugin is already registered. """
return plugin in self._plugin2hookcallers
def get_canonical_name(self, plugin):
""" Return canonical name for a plugin object. Note that a plugin
may be registered under a different name which was specified
by the caller of register(plugin, name). To obtain the name
of a registered plugin use ``get_name(plugin)`` instead."""
return getattr(plugin, "__name__", None) or str(id(plugin))
def get_plugin(self, name):
""" Return a plugin or None for the given name. """
return self._name2plugin.get(name)
def has_plugin(self, name):
""" Return True if a plugin with the given name is registered. """
return self.get_plugin(name) is not None
def get_name(self, plugin):
""" Return name for registered plugin or None if not registered. """
for name, val in self._name2plugin.items():
if plugin == val:
return name
def _verify_hook(self, hook, hookimpl):
if hook.is_historic() and hookimpl.hookwrapper:
raise PluginValidationError(
"Plugin %r\nhook %r\nhistoric incompatible to hookwrapper" %
(hookimpl.plugin_name, hook.name))
for arg in hookimpl.argnames:
if arg not in hook.argnames:
raise PluginValidationError(
"Plugin %r\nhook %r\nargument %r not available\n"
"plugin definition: %s\n"
"available hookargs: %s" %
(hookimpl.plugin_name, hook.name, arg,
_formatdef(hookimpl.function), ", ".join(hook.argnames)))
def check_pending(self):
""" Verify that all hooks which have not been verified against
a hook specification are optional, otherwise raise PluginValidationError"""
for name in self.hook.__dict__:
if name[0] != "_":
hook = getattr(self.hook, name)
if not hook.has_spec():
for hookimpl in (hook._wrappers + hook._nonwrappers):
if not hookimpl.optionalhook:
raise PluginValidationError(
"unknown hook %r in plugin %r" %
(name, hookimpl.plugin))
def load_setuptools_entrypoints(self, entrypoint_name):
""" Load modules from querying the specified setuptools entrypoint name.
Return the number of loaded plugins. """
from pkg_resources import (iter_entry_points, DistributionNotFound,
VersionConflict)
for ep in iter_entry_points(entrypoint_name):
# is the plugin registered or blocked?
if self.get_plugin(ep.name) or self.is_blocked(ep.name):
continue
try:
plugin = ep.load()
except DistributionNotFound:
continue
except VersionConflict as e:
raise PluginValidationError(
"Plugin %r could not be loaded: %s!" % (ep.name, e))
self.register(plugin, name=ep.name)
self._plugin_distinfo.append((plugin, ep.dist))
return len(self._plugin_distinfo)
def list_plugin_distinfo(self):
""" return list of distinfo/plugin tuples for all setuptools registered
plugins. """
return list(self._plugin_distinfo)
def list_name_plugin(self):
""" return list of name/plugin pairs. """
return list(self._name2plugin.items())
def get_hookcallers(self, plugin):
""" get all hook callers for the specified plugin. """
return self._plugin2hookcallers.get(plugin)
def add_hookcall_monitoring(self, before, after):
""" add before/after tracing functions for all hooks
and return an undo function which, when called,
will remove the added tracers.
``before(hook_name, hook_impls, kwargs)`` will be called ahead
of all hook calls and receive a hookcaller instance, a list
of HookImpl instances and the keyword arguments for the hook call.
``after(outcome, hook_name, hook_impls, kwargs)`` receives the
same arguments as ``before`` but also a :py:class:`_CallOutcome <_pytest.vendored_packages.pluggy._CallOutcome>` object
which represents the result of the overall hook call.
"""
return _TracedHookExecution(self, before, after).undo
def enable_tracing(self):
""" enable tracing of hook calls and return an undo function. """
hooktrace = self.hook._trace
def before(hook_name, methods, kwargs):
hooktrace.root.indent += 1
hooktrace(hook_name, kwargs)
def after(outcome, hook_name, methods, kwargs):
if outcome.excinfo is None:
hooktrace("finish", hook_name, "-->", outcome.result)
hooktrace.root.indent -= 1
return self.add_hookcall_monitoring(before, after)
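# Usage sketch (editor's illustration, assuming a PluginManager instance ``pm``):
# custom tracers can be attached and later removed via the returned undo function:
#
#     def before(hook_name, hook_impls, kwargs):
#         print("calling", hook_name, "with", sorted(kwargs))
#
#     def after(outcome, hook_name, hook_impls, kwargs):
#         if outcome.excinfo is None:
#             print("finished", hook_name, "->", outcome.result)
#
#     undo = pm.add_hookcall_monitoring(before, after)
#     ...  # hook calls happen here and are traced
#     undo()  # detach the tracers again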
def subset_hook_caller(self, name, remove_plugins):
""" Return a new _HookCaller instance for the named method
which manages calls to all registered plugins except the
ones from remove_plugins. """
orig = getattr(self.hook, name)
plugins_to_remove = [plug for plug in remove_plugins if hasattr(plug, name)]
if plugins_to_remove:
hc = _HookCaller(orig.name, orig._hookexec, orig._specmodule_or_class,
orig.spec_opts)
for hookimpl in (orig._wrappers + orig._nonwrappers):
plugin = hookimpl.plugin
if plugin not in plugins_to_remove:
hc._add_hookimpl(hookimpl)
# we also keep track of this hook caller so it
# gets properly removed on plugin unregistration
self._plugin2hookcallers.setdefault(plugin, []).append(hc)
return hc
return orig
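# Sketch of subset_hook_caller usage (hypothetical plugin objects): given two
# registered plugins ``a`` and ``b`` that both implement ``myhook``, a caller
# that ignores ``b`` can be obtained and invoked like a normal hook:
#
#     hc = pm.subset_hook_caller("myhook", remove_plugins=[b])
#     hc(arg1=..., arg2=...)   # calls every "myhook" implementation except b's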
class _MultiCall:
""" execute a call into multiple python functions/methods. """
# XXX note that the __multicall__ argument is supported only
# for pytest compatibility reasons. It was never officially
# supported there and has been explicitly deprecated since 2.8,
# so we can remove it soon, which will let us avoid the recursion below
# in execute() and simplify/speed up the execute loop.
def __init__(self, hook_impls, kwargs, specopts={}):
self.hook_impls = hook_impls
self.kwargs = kwargs
self.kwargs["__multicall__"] = self
self.specopts = specopts
def execute(self):
all_kwargs = self.kwargs
self.results = results = []
firstresult = self.specopts.get("firstresult")
while self.hook_impls:
hook_impl = self.hook_impls.pop()
try:
args = [all_kwargs[argname] for argname in hook_impl.argnames]
except KeyError:
for argname in hook_impl.argnames:
if argname not in all_kwargs:
raise HookCallError(
"hook call must provide argument %r" % (argname,))
if hook_impl.hookwrapper:
return _wrapped_call(hook_impl.function(*args), self.execute)
res = hook_impl.function(*args)
if res is not None:
if firstresult:
return res
results.append(res)
if not firstresult:
return results
def __repr__(self):
status = "%d meths" % (len(self.hook_impls),)
if hasattr(self, "results"):
status = ("%d results, " % len(self.results)) + status
return "<_MultiCall %s, kwargs=%r>" % (status, self.kwargs)
def varnames(func, startindex=None):
""" return argument name tuple for a function, method, class or callable.
In case of a class, its "__init__" method is considered.
For methods the "self" parameter is not included unless you are passing
an unbound method on Python 3 (which has no support for unbound methods)
"""
cache = getattr(func, "__dict__", {})
try:
return cache["_varnames"]
except KeyError:
pass
if inspect.isclass(func):
try:
func = func.__init__
except AttributeError:
return ()
startindex = 1
else:
if not inspect.isfunction(func) and not inspect.ismethod(func):
try:
func = getattr(func, '__call__', func)
except Exception:
return ()
if startindex is None:
startindex = int(inspect.ismethod(func))
try:
rawcode = func.__code__
except AttributeError:
return ()
try:
x = rawcode.co_varnames[startindex:rawcode.co_argcount]
except AttributeError:
x = ()
else:
defaults = func.__defaults__
if defaults:
x = x[:-len(defaults)]
try:
cache["_varnames"] = x
except TypeError:
pass
return x
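# Example of what ``varnames`` computes (illustrative only): arguments with
# defaults are dropped, as is ``self`` for bound methods:
#
#     def myhook(config, items, verbose=False):
#         pass
#
#     varnames(myhook)  # -> ("config", "items")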
class _HookRelay:
""" hook holder object for performing 1:N hook calls where N is the number
of registered plugins.
"""
def __init__(self, trace):
self._trace = trace
class _HookCaller(object):
def __init__(self, name, hook_execute, specmodule_or_class=None, spec_opts=None):
self.name = name
self._wrappers = []
self._nonwrappers = []
self._hookexec = hook_execute
if specmodule_or_class is not None:
assert spec_opts is not None
self.set_specification(specmodule_or_class, spec_opts)
def has_spec(self):
return hasattr(self, "_specmodule_or_class")
def set_specification(self, specmodule_or_class, spec_opts):
assert not self.has_spec()
self._specmodule_or_class = specmodule_or_class
specfunc = getattr(specmodule_or_class, self.name)
argnames = varnames(specfunc, startindex=inspect.isclass(specmodule_or_class))
assert "self" not in argnames # sanity check
self.argnames = ["__multicall__"] + list(argnames)
self.spec_opts = spec_opts
if spec_opts.get("historic"):
self._call_history = []
def is_historic(self):
return hasattr(self, "_call_history")
def _remove_plugin(self, plugin):
def remove(wrappers):
for i, method in enumerate(wrappers):
if method.plugin == plugin:
del wrappers[i]
return True
if remove(self._wrappers) is None:
if remove(self._nonwrappers) is None:
raise ValueError("plugin %r not found" % (plugin,))
def _add_hookimpl(self, hookimpl):
if hookimpl.hookwrapper:
methods = self._wrappers
else:
methods = self._nonwrappers
if hookimpl.trylast:
methods.insert(0, hookimpl)
elif hookimpl.tryfirst:
methods.append(hookimpl)
else:
# find last non-tryfirst method
i = len(methods) - 1
while i >= 0 and methods[i].tryfirst:
i -= 1
methods.insert(i + 1, hookimpl)
def __repr__(self):
return "<_HookCaller %r>" % (self.name,)
def __call__(self, **kwargs):
assert not self.is_historic()
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
def call_historic(self, proc=None, kwargs=None):
self._call_history.append((kwargs or {}, proc))
# historizing hooks don't return results
self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
def call_extra(self, methods, kwargs):
""" Call the hook with some additional temporarily participating
methods using the specified kwargs as call parameters. """
old = list(self._nonwrappers), list(self._wrappers)
for method in methods:
opts = dict(hookwrapper=False, trylast=False, tryfirst=False)
hookimpl = HookImpl(None, "<temp>", method, opts)
self._add_hookimpl(hookimpl)
try:
return self(**kwargs)
finally:
self._nonwrappers, self._wrappers = old
def _maybe_apply_history(self, method):
if self.is_historic():
for kwargs, proc in self._call_history:
res = self._hookexec(self, [method], kwargs)
if res and proc is not None:
proc(res[0])
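# Sketch of historic hook behaviour (hypothetical names): a hook declared with
# ``historic=True`` memorizes every ``call_historic()`` invocation and replays
# it for implementations that get registered later:
#
#     pm.hook.myhook.call_historic(kwargs=dict(config=config))
#     pm.register(late_plugin)   # late_plugin's myhook is now called with config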
class HookImpl:
def __init__(self, plugin, plugin_name, function, hook_impl_opts):
self.function = function
self.argnames = varnames(self.function)
self.plugin = plugin
self.opts = hook_impl_opts
self.plugin_name = plugin_name
self.__dict__.update(hook_impl_opts)
class PluginValidationError(Exception):
""" plugin failed validation. """
class HookCallError(Exception):
""" Hook was called wrongly. """
if hasattr(inspect, 'signature'):
def _formatdef(func):
return "%s%s" % (
func.__name__,
str(inspect.signature(func))
)
else:
def _formatdef(func):
return "%s%s" % (
func.__name__,
inspect.formatargspec(*inspect.getargspec(func))
)

View File

@ -39,8 +39,9 @@ def pytest_addoption(parser):
'-W', '--pythonwarnings', action='append',
help="set which warnings to report, see -W option of python itself.")
parser.addini("filterwarnings", type="linelist",
help="Each line specifies warning filter pattern which would be passed"
"to warnings.filterwarnings. Process after -W and --pythonwarnings.")
help="Each line specifies a pattern for "
"warnings.filterwarnings. "
"Processed after -W and --pythonwarnings.")
@contextmanager
@ -59,6 +60,11 @@ def catch_warnings_for_item(item):
for arg in inifilters:
_setoption(warnings, arg)
for mark in item.iter_markers():
if mark.name == 'filterwarnings':
for arg in mark.args:
warnings._setoption(arg)
yield
for warning in log:
@ -66,8 +72,10 @@ def catch_warnings_for_item(item):
unicode_warning = False
if compat._PY2 and any(isinstance(m, compat.UNICODE_TYPES) for m in warn_msg.args):
new_args = [compat.safe_str(m) for m in warn_msg.args]
unicode_warning = warn_msg.args != new_args
new_args = []
for m in warn_msg.args:
new_args.append(compat.ascii_escaped(m) if isinstance(m, compat.UNICODE_TYPES) else m)
unicode_warning = list(warn_msg.args) != new_args
warn_msg.args = new_args
msg = warnings.formatwarning(
@ -78,7 +86,7 @@ def catch_warnings_for_item(item):
if unicode_warning:
warnings.warn(
"Warning is using unicode non convertible to ascii, "
"converting to a safe representation:\n %s" % msg,
"converting to a safe representation:\n %s" % msg,
UnicodeWarning)
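For context, the ``filterwarnings`` ini option and the ``filterwarnings`` mark handled above accept the same filter syntax as Python's ``-W`` option; a minimal sketch of how a test might use the mark (hypothetical test, not part of this diff)::

    import warnings

    import pytest

    @pytest.mark.filterwarnings("error::DeprecationWarning")
    def test_deprecation_becomes_error():
        # the mark turns DeprecationWarning into an error for this test only
        with pytest.raises(DeprecationWarning):
            warnings.warn("legacy API", DeprecationWarning)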

View File

@ -10,9 +10,7 @@ environment:
- TOXENV: "coveralls"
# note: please use "tox --listenvs" to populate the build matrix below
- TOXENV: "linting"
- TOXENV: "py26"
- TOXENV: "py27"
- TOXENV: "py33"
- TOXENV: "py34"
- TOXENV: "py35"
- TOXENV: "py36"
@ -20,12 +18,16 @@ environment:
- TOXENV: "py27-pexpect"
- TOXENV: "py27-xdist"
- TOXENV: "py27-trial"
- TOXENV: "py35-pexpect"
- TOXENV: "py35-xdist"
- TOXENV: "py35-trial"
- TOXENV: "py27-numpy"
- TOXENV: "py27-pluggymaster"
- TOXENV: "py36-pexpect"
- TOXENV: "py36-xdist"
- TOXENV: "py36-trial"
- TOXENV: "py36-numpy"
- TOXENV: "py36-pluggymaster"
- TOXENV: "py27-nobyte"
- TOXENV: "doctesting"
- TOXENV: "freeze"
- TOXENV: "py35-freeze"
- TOXENV: "docs"
install:
@ -34,7 +36,7 @@ install:
- if "%TOXENV%" == "pypy" call scripts\install-pypy.bat
- C:\Python35\python -m pip install tox
- C:\Python36\python -m pip install --upgrade --pre tox
build: false # Not a C# project, build stuff at the test step instead.

View File

@ -1 +0,0 @@
All old-style specific behavior in current classes in the pytest's API is considered deprecated at this point and will be removed in a future release. This affects Python 2 users only and in rare situations.

View File

@ -1 +0,0 @@
introduce deprecation warnings for legacy marks based parametersets

View File

@ -1 +0,0 @@
Fix decode error in Python 2 for doctests in docstrings.

View File

@ -1 +0,0 @@
Exceptions raised during teardown by finalizers are now suppressed until all finalizers are called, with the initial exception reraised.

View File

@ -1 +0,0 @@
Fix incorrect "collected items" report when specifying tests on the command-line.

View File

@ -1,4 +0,0 @@
``deprecated_call`` in context-manager form now captures deprecation warnings even if
the same warning has already been raised. Also, ``deprecated_call`` will always produce
the same error message (previously it would produce different messages in context-manager vs.
function-call mode).

View File

@ -1 +0,0 @@
Create invoke tasks for updating the vendored packages.

View File

@ -1 +0,0 @@
Fix internal error when trying to detect the start of a recursive traceback.

View File

@ -1 +0,0 @@
Internal code move: move code for pytest.approx/pytest.raises to own files in order to cut down the size of python.py

View File

@ -1 +0,0 @@
Explicitly state for which hooks the calls stop after the first non-None result.

View File

@ -1 +0,0 @@
Update copyright dates in LICENSE, README.rst and in the documentation.

View File

@ -1 +0,0 @@
Now test function objects have a ``pytestmark`` attribute containing a list of marks applied directly to the test function, as opposed to marks inherited from parent classes or modules.

View File

@ -0,0 +1 @@
A rare race condition which might result in corrupted ``.pyc`` files on Windows has hopefully been solved.

View File

@ -0,0 +1 @@
``pytest`` now depends on the `python-atomicwrites <https://github.com/untitaker/python-atomicwrites>`_ library.

View File

@ -0,0 +1 @@
Support for Python 3.7's builtin ``breakpoint()`` function, see `Using the builtin breakpoint function <https://docs.pytest.org/en/latest/usage.html#breakpoint-builtin>`_ for details.
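A minimal sketch of what this enables (assuming Python 3.7+; hypothetical test and helper)::

    def test_debugging_a_failure():
        value = compute_value()   # hypothetical helper under test
        breakpoint()              # drops into the debugger integrated with pytest
        assert value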

2
changelog/3290.feature Normal file
View File

@ -0,0 +1,2 @@
``monkeypatch`` now supports a ``context()`` function which acts as a context manager which undoes all patching done
within the ``with`` block.
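A minimal usage sketch of the feature described above (hypothetical values)::

    import os

    def test_scoped_patching(monkeypatch):
        with monkeypatch.context() as m:
            m.setattr(os.path, "exists", lambda path: True)
            assert os.path.exists("/definitely/not/there")
        assert not os.path.exists("/definitely/not/there")  # undone after the block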

View File

@ -0,0 +1,3 @@
pytest no longer changes the log level of the root logger when the
``log-level`` parameter has a greater numeric value than that of the level of
the root logger, which makes it play better with custom logging configuration in user code.

1
changelog/3317.feature Normal file
View File

@ -0,0 +1 @@
Introduce correct per-node mark handling and deprecate the existing, always-incorrect mark handling.

View File

@ -0,0 +1 @@
Remove internal ``_pytest.terminal.flatten`` function in favor of ``more_itertools.collapse``.

1
changelog/3339.trivial Normal file
View File

@ -0,0 +1 @@
Import some modules from ``collections.abc`` instead of ``collections``, as importing abstract base classes from ``collections`` triggers a ``DeprecationWarning`` in Python 3.7.

View File

@ -0,0 +1 @@
``pytest.raises`` now raises ``TypeError`` when receiving an unknown keyword argument.

2
changelog/3360.trivial Normal file
View File

@ -0,0 +1,2 @@
``record_property`` is no longer experimental; the leftover experimental-feature warning, which had been forgotten, has now been removed.

View File

@ -0,0 +1 @@
``pytest.raises`` now works with exception classes that look like iterables.

32
changelog/README.rst Normal file
View File

@ -0,0 +1,32 @@
This directory contains "newsfragments" which are short files that contain a small **ReST**-formatted
text that will be added to the next ``CHANGELOG``.
The ``CHANGELOG`` will be read by users, so this description should be aimed at pytest users
instead of describing internal changes which are only relevant to developers.
Make sure to use full sentences with correct case and punctuation, for example::
Fix issue with non-ascii messages from the ``warnings`` module.
Each file should be named like ``<ISSUE>.<TYPE>.rst``, where
``<ISSUE>`` is an issue number, and ``<TYPE>`` is one of:
* ``feature``: new user facing features, like new command-line options and new behavior.
* ``bugfix``: fixes a reported bug.
* ``doc``: documentation improvement, like rewording an entire section or adding missing docs.
* ``removal``: feature deprecation or removal.
* ``vendor``: changes in packages vendored in pytest.
* ``trivial``: fixing a small typo or internal change that might be noteworthy.
So for example: ``123.feature.rst``, ``456.bugfix.rst``.
If your PR fixes an issue, use that number here. If there is no issue,
then after you submit the PR and get the PR number you can add a
changelog using that instead.
If you are not sure what issue type to use, don't hesitate to ask in your PR.
Note that the ``towncrier`` tool will automatically
reflow your text, so it will work best if you stick to a single paragraph, but multiple sentences and links are OK
and encouraged. You can install ``towncrier`` and then run ``towncrier --draft``
if you want to get a preview of how your change will look in the final release notes.
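For instance (an illustrative example, not a fragment contained in this PR), a bug fix for a hypothetical issue 9999 would live in ``changelog/9999.bugfix.rst`` and contain a single full sentence such as::

    Fix crash when collecting files with non-ascii names on Windows.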

View File

@ -13,7 +13,8 @@
{% if definitions[category]['showcontent'] %}
{% for text, values in sections[section][category]|dictsort(by='value') %}
- {{ text }}{% if category != 'vendor' %} (`{{ values[0] }} <https://github.com/pytest-dev/pytest/issues/{{ values[0][1:] }}>`_){% endif %}
{% set issue_joiner = joiner(', ') %}
- {{ text }}{% if category != 'vendor' %} ({% for value in values|sort %}{{ issue_joiner() }}`{{ value }} <https://github.com/pytest-dev/pytest/issues/{{ value[1:] }}>`_{% endfor %}){% endif %}
{% endfor %}

View File

@ -13,8 +13,6 @@ PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
REGENDOC_ARGS := \
--normalize "/={8,} (.*) ={8,}/======= \1 ========/" \
--normalize "/_{8,} (.*) _{8,}/_______ \1 ________/" \
--normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \
--normalize "@/tmp/pytest-of-.*/pytest-\d+@PYTEST_TMPDIR@" \
--normalize "@pytest-(\d+)\\.[^ ,]+@pytest-\1.x.y@" \

View File

@ -2,14 +2,16 @@
<ul>
<li><a href="{{ pathto('index') }}">Home</a></li>
<li><a href="{{ pathto('contents') }}">Contents</a></li>
<li><a href="{{ pathto('getting-started') }}">Install</a></li>
<li><a href="{{ pathto('contents') }}">Contents</a></li>
<li><a href="{{ pathto('reference') }}">Reference</a></li>
<li><a href="{{ pathto('example/index') }}">Examples</a></li>
<li><a href="{{ pathto('customize') }}">Customize</a></li>
<li><a href="{{ pathto('contact') }}">Contact</a></li>
<li><a href="{{ pathto('talks') }}">Talks/Posts</a></li>
<li><a href="{{ pathto('changelog') }}">Changelog</a></li>
<li><a href="{{ pathto('contributing') }}">Contributing</a></li>
<li><a href="{{ pathto('backwards-compatibility') }}">Backwards Compatibility</a></li>
<li><a href="{{ pathto('license') }}">License</a></li>
<li><a href="{{ pathto('contact') }}">Contact Channels</a></li>
</ul>
{%- if display_toc %}

View File

@ -1,7 +1,5 @@
<h3>Useful Links</h3>
<ul>
<li><a href="{{ pathto('index') }}">The pytest Website</a></li>
<li><a href="{{ pathto('contributing') }}">Contribution Guide</a></li>
<li><a href="https://pypi.python.org/pypi/pytest">pytest @ PyPI</a></li>
<li><a href="https://github.com/pytest-dev/pytest/">pytest @ GitHub</a></li>
<li><a href="http://plugincompat.herokuapp.com/">3rd party plugins</a></li>

View File

@ -6,6 +6,20 @@ Release announcements
:maxdepth: 2
release-3.5.0
release-3.4.2
release-3.4.1
release-3.4.0
release-3.3.2
release-3.3.1
release-3.3.0
release-3.2.5
release-3.2.4
release-3.2.3
release-3.2.2
release-3.2.1
release-3.2.0
release-3.1.3
release-3.1.2
release-3.1.1
release-3.1.0

View File

@ -62,7 +62,7 @@ holger krekel
- fix issue655: work around different ways that cause python2/3
to leak sys.exc_info into fixtures/tests causing failures in 3rd party code
- fix issue615: assertion re-writing did not correctly escape % signs
- fix issue615: assertion rewriting did not correctly escape % signs
when formatting boolean operations, which tripped over mixing
booleans with modulo operators. Thanks to Tom Viner for the report,
triaging and fix.

View File

@ -0,0 +1,23 @@
pytest-3.1.3
=======================================
pytest 3.1.3 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Antoine Legrand
* Bruno Oliveira
* Max Moroz
* Raphael Pierzina
* Ronny Pfannschmidt
* Ryan Fitzpatrick
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,48 @@
pytest-3.2.0
=======================================
The pytest team is proud to announce the 3.2.0 release!
pytest is a mature Python testing tool with more than 1600 tests
against itself, passing on many different interpreters and platforms.
This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:
http://doc.pytest.org/en/latest/changelog.html
For complete documentation, please visit:
http://docs.pytest.org
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
* Alex Hartoto
* Andras Tim
* Bruno Oliveira
* Daniel Hahler
* Florian Bruhin
* Floris Bruynooghe
* John Still
* Jordan Moldow
* Kale Kundert
* Lawrence Mitchell
* Llandy Riveron Del Risco
* Maik Figura
* Martin Altmayer
* Mihai Capotă
* Nathaniel Waisbrot
* Nguyễn Hồng Quân
* Pauli Virtanen
* Raphael Pierzina
* Ronny Pfannschmidt
* Segev Finer
* V.Kuznetsov
Happy testing,
The Pytest Development Team

View File

@ -0,0 +1,22 @@
pytest-3.2.1
=======================================
pytest 3.2.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Alex Gaynor
* Bruno Oliveira
* Florian Bruhin
* Ronny Pfannschmidt
* Srinivas Reddy Thatiparthy
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,28 @@
pytest-3.2.2
=======================================
pytest 3.2.2 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Andreas Pelme
* Antonio Hidalgo
* Bruno Oliveira
* Felipe Dau
* Fernando Macedo
* Jesús Espino
* Joan Massich
* Joe Talbott
* Kirill Pinchuk
* Ronny Pfannschmidt
* Xuan Luong
Happy testing,
The pytest Development Team

Some files were not shown because too many files have changed in this diff.