Merge remote-tracking branch 'upstream/features' into ApaDoctor/disable-repeated-fixture

Bruno Oliveira 2018-04-23 22:24:53 -03:00
commit 132fb61eba
222 changed files with 16500 additions and 7521 deletions


@@ -2,6 +2,3 @@
 omit =
     # standlonetemplate is read dynamically and tested by test_genscript
     *standalonetemplate.py
-    # oldinterpret could be removed, as it is no longer used in py26+
-    *oldinterpret.py
-    vendored_packages


@@ -1,15 +1,14 @@
 Thanks for submitting a PR, your contribution is really appreciated!
 
-Here's a quick checklist that should be present in PRs:
+Here's a quick checklist that should be present in PRs (you can delete this text from the final description, this is
+just a guideline):
 
-- [ ] Add a new news fragment into the changelog folder
-  * name it `$issue_id.$type` for example (588.bug)
-  * if you don't have an issue_id change it to the pr id after creating the pr
-  * ensure type is one of `removal`, `feature`, `bugfix`, `vendor`, `doc` or `trivial`
-  * Make sure to use full sentences with correct case and punctuation, for example: "Fix issue with non-ascii contents in doctest text files."
-- [ ] Target: for `bugfix`, `vendor`, `doc` or `trivial` fixes, target `master`; for removals or features target `features`;
-- [ ] Make sure to include reasonable tests for your change if necessary
+- [ ] Create a new changelog file in the `changelog` folder, with a name like `<ISSUE NUMBER>.<TYPE>.rst`. See [changelog/README.rst](/changelog/README.rst) for details.
+- [ ] Target the `master` branch for bug fixes, documentation updates and trivial changes.
+- [ ] Target the `features` branch for new features and removals/deprecations.
+- [ ] Include documentation when adding new features.
+- [ ] Include new tests or update existing tests when applicable.
 
-Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please:
+Unless your change is trivial or a small documentation fix (e.g., a typo or reword of a small section) please:
 
-- [ ] Add yourself to `AUTHORS`;
+- [ ] Add yourself to `AUTHORS` in alphabetical order;

.gitignore

@@ -33,6 +33,7 @@ env/
 3rdparty/
 .tox
 .cache
+.pytest_cache
 .coverage
 .ropeproject
 .idea


@@ -1,42 +1,58 @@
 sudo: false
 language: python
 python:
-  - '3.5'
+  - '3.6'
-# command to install dependencies
-install: "pip install -U tox"
-# # command to run tests
+install:
+  - pip install --upgrade --pre tox
 env:
   matrix:
     # coveralls is not listed in tox's envlist, but should run in travis
     - TOXENV=coveralls
     # note: please use "tox --listenvs" to populate the build matrix below
     - TOXENV=linting
-    - TOXENV=py26
     - TOXENV=py27
-    - TOXENV=py33
     - TOXENV=py34
-    - TOXENV=py35
+    - TOXENV=py36
-    - TOXENV=pypy
     - TOXENV=py27-pexpect
     - TOXENV=py27-xdist
     - TOXENV=py27-trial
-    - TOXENV=py35-pexpect
-    - TOXENV=py35-xdist
-    - TOXENV=py35-trial
+    - TOXENV=py27-numpy
+    - TOXENV=py27-pluggymaster
+    - TOXENV=py36-pexpect
+    - TOXENV=py36-xdist
+    - TOXENV=py36-trial
+    - TOXENV=py36-numpy
+    - TOXENV=py36-pluggymaster
     - TOXENV=py27-nobyte
     - TOXENV=doctesting
-    - TOXENV=freeze
     - TOXENV=docs
 
-matrix:
+jobs:
   include:
-    - env: TOXENV=py36
+    - env: TOXENV=pypy
+      python: 'pypy-5.4'
+    - env: TOXENV=py35
+      python: '3.5'
+    - env: TOXENV=py35-freeze
+      python: '3.5'
+    - env: TOXENV=py37
+      python: 'nightly'
+    - stage: deploy
       python: '3.6'
-    - env: TOXENV=py37
-      python: 'nightly'
-  allow_failures:
-    - env: TOXENV=py37
-      python: 'nightly'
+      env:
+      install: pip install -U setuptools setuptools_scm
+      script: skip
+      deploy:
+        provider: pypi
+        user: nicoddemus
+        distributions: sdist bdist_wheel
+        skip_upload_docs: true
password:
secure: xanTgTUu6XDQVqB/0bwJQXoDMnU5tkwZc5koz6mBkkqZhKdNOi2CLoC1XhiSZ+ah24l4V1E0GAqY5kBBcy9d7NVe4WNg4tD095LsHw+CRU6/HCVIFfyk2IZ+FPAlguesCcUiJSXOrlBF+Wj68wEvLoK7EoRFbJeiZ/f91Ww1sbtDlqXABWGHrmhPJL5Wva7o7+wG7JwJowqdZg1pbQExsCc7b53w4v2RBu3D6TJaTAzHiVsW+nUSI67vKI/uf+cR/OixsTfy37wlHgSwihYmrYLFls3V0bSpahCim3bCgMaFZx8S8xrdgJ++PzBCof2HeflFKvW+VCkoYzGEG4NrTWJoNz6ni4red9GdvfjGH3YCjAKS56h9x58zp2E5rpsb/kVq5/45xzV+dq6JRuhQ1nJWjBC6fSKAc/bfwnuFK3EBxNLkvBssLHvsNjj5XG++cB8DdS9wVGUqjpoK4puaXUWFqy4q3S9F86HEsKNgExtieA9qNx+pCIZVs6JCXZNjr0I5eVNzqJIyggNgJG6RyravsU35t9Zd9doL5g4Y7UKmAGTn1Sz24HQ4sMQgXdm2SyD8gEK5je4tlhUvfGtDvMSlstq71kIn9nRpFnqB6MFlbYSEAZmo8dGbCquoUc++6Rum208wcVbrzzVtGlXB/Ow9AbFMYeAGA0+N/K1e59c=
+        on:
+          tags: true
+          repo: pytest-dev/pytest
 
 script: tox --recreate

AUTHORS

@ -3,19 +3,25 @@ merlinux GmbH, Germany, office at merlinux eu
Contributors include:: Contributors include::
Aaron Coleman
Abdeali JK Abdeali JK
Abhijeet Kasurde Abhijeet Kasurde
Ahn Ki-Wook Ahn Ki-Wook
Alan Velasco
Alexander Johnson Alexander Johnson
Alexei Kozlenok Alexei Kozlenok
Anatoly Bubenkoff Anatoly Bubenkoff
Anders Hovmöller
Andras Tim
Andreas Zeidler Andreas Zeidler
Andrzej Ostrowski Andrzej Ostrowski
Andy Freeland Andy Freeland
Anthon van der Neut Anthon van der Neut
Anthony Shaw
Anthony Sottile Anthony Sottile
Antony Lee Antony Lee
Armin Rigo Armin Rigo
Aron Coyle
Aron Curzon Aron Curzon
Aviv Palivoda Aviv Palivoda
Barney Gale Barney Gale
@ -24,11 +30,14 @@ Benjamin Peterson
Bernard Pratz Bernard Pratz
Bob Ippolito Bob Ippolito
Brian Dorsey Brian Dorsey
Brian Maissy
Brian Okken Brian Okken
Brianna Laugher Brianna Laugher
Bruno Oliveira Bruno Oliveira
Cal Leeming Cal Leeming
Carl Friedrich Bolz Carl Friedrich Bolz
Carlos Jenkins
Ceridwen
Charles Cloud Charles Cloud
Charnjit SiNGH (CCSJ) Charnjit SiNGH (CCSJ)
Chris Lamb Chris Lamb
@ -36,6 +45,7 @@ Christian Boelsen
Christian Theunert Christian Theunert
Christian Tismer Christian Tismer
Christopher Gilling Christopher Gilling
Cyrus Maden
Daniel Grana Daniel Grana
Daniel Hahler Daniel Hahler
Daniel Nuri Daniel Nuri
@ -45,6 +55,7 @@ Dave Hunt
David Díaz-Barquero David Díaz-Barquero
David Mohr David Mohr
David Vierra David Vierra
Daw-Ran Liou
Denis Kirisov Denis Kirisov
Diego Russo Diego Russo
Dmitry Dygalo Dmitry Dygalo
@ -63,6 +74,7 @@ Feng Ma
Florian Bruhin Florian Bruhin
Floris Bruynooghe Floris Bruynooghe
Gabriel Reis Gabriel Reis
George Kussumoto
Georgy Dyuldin Georgy Dyuldin
Graham Horler Graham Horler
Greg Price Greg Price
@ -70,33 +82,45 @@ Grig Gheorghiu
Grigorii Eremeev (budulianin) Grigorii Eremeev (budulianin)
Guido Wesdorp Guido Wesdorp
Harald Armin Massa Harald Armin Massa
Henk-Jaap Wagenaar
Hugo van Kemenade
Hui Wang (coldnight) Hui Wang (coldnight)
Ian Bicking Ian Bicking
Ian Lesperance
Jaap Broekhuizen Jaap Broekhuizen
Jan Balster Jan Balster
Janne Vanhala Janne Vanhala
Jason R. Coombs Jason R. Coombs
Javier Domingo Cansino Javier Domingo Cansino
Javier Romero Javier Romero
Jeff Rackauckas
Jeff Widman Jeff Widman
John Eddie Ayson
John Towler John Towler
Jon Sonesen Jon Sonesen
Jonas Obrist Jonas Obrist
Jordan Guymon Jordan Guymon
Jordan Moldow
Jordan Speicher
Joshua Bronson Joshua Bronson
Jurko Gospodnetić Jurko Gospodnetić
Justyna Janczyszyn Justyna Janczyszyn
Kale Kundert Kale Kundert
Katarzyna Jachim Katarzyna Jachim
Katerina Koukiou
Kevin Cox Kevin Cox
Kodi B. Arfer Kodi B. Arfer
Kostis Anagnostopoulos
Lawrence Mitchell
Lee Kamentsky Lee Kamentsky
Lev Maximov Lev Maximov
Llandy Riveron Del Risco
Loic Esteve Loic Esteve
Lukas Bednar Lukas Bednar
Luke Murphy Luke Murphy
Maciek Fijalkowski Maciek Fijalkowski
Maho Maho
Maik Figura
Mandeep Bhutani Mandeep Bhutani
Manuel Krebber Manuel Krebber
Marc Schlaich Marc Schlaich
@ -104,6 +128,7 @@ Marcin Bachry
Mark Abramowitz Mark Abramowitz
Markus Unterwaditzer Markus Unterwaditzer
Martijn Faassen Martijn Faassen
Martin Altmayer
Martin K. Scherer Martin K. Scherer
Martin Prusse Martin Prusse
Mathieu Clabaut Mathieu Clabaut
@ -111,28 +136,34 @@ Matt Bachmann
Matt Duck Matt Duck
Matt Williams Matt Williams
Matthias Hafner Matthias Hafner
Maxim Filipenko
mbyt mbyt
Michael Aquilina Michael Aquilina
Michael Birtwell Michael Birtwell
Michael Droettboom Michael Droettboom
Michael Seifert Michael Seifert
Michal Wajszczuk Michal Wajszczuk
Mihai Capotă
Mike Lundy Mike Lundy
Nathaniel Waisbrot
Ned Batchelder Ned Batchelder
Neven Mundar Neven Mundar
Nicolas Delaby Nicolas Delaby
Oleg Pidsadnyi Oleg Pidsadnyi
Oleg Sushchenko
Oliver Bestwalter Oliver Bestwalter
Omar Kohl Omar Kohl
Omer Hadari Omer Hadari
Patrick Hayes Patrick Hayes
Paweł Adamczak Paweł Adamczak
Pedro Algarvio
Pieter Mulder Pieter Mulder
Piotr Banaszkiewicz Piotr Banaszkiewicz
Punyashloka Biswal Punyashloka Biswal
Quentin Pradet Quentin Pradet
Ralf Schmitt Ralf Schmitt
Ran Benita Ran Benita
Raphael Castaneda
Raphael Pierzina Raphael Pierzina
Raquel Alegre Raquel Alegre
Ravi Chandra Ravi Chandra
@ -143,25 +174,37 @@ Ronny Pfannschmidt
Ross Lawley Ross Lawley
Russel Winder Russel Winder
Ryan Wooden Ryan Wooden
Samuel Dion-Girardeau
Samuele Pedroni Samuele Pedroni
Segev Finer Segev Finer
Simon Gomizelj Simon Gomizelj
Skylar Downes Skylar Downes
Srinivas Reddy Thatiparthy
Stefan Farmbauer Stefan Farmbauer
Stefan Zimmermann Stefan Zimmermann
Stefano Taschini Stefano Taschini
Steffen Allner Steffen Allner
Stephan Obermann Stephan Obermann
Tarcisio Fischer
Tareq Alayan Tareq Alayan
Ted Xiao Ted Xiao
Thomas Grainger Thomas Grainger
Thomas Hisch
Tim Strazny
Tom Dalton
Tom Viner Tom Viner
Trevor Bekolay Trevor Bekolay
Tyler Goodlet Tyler Goodlet
Tzu-ping Chung
Vasily Kuznetsov Vasily Kuznetsov
Victor Uriarte Victor Uriarte
Vidar T. Fauske Vidar T. Fauske
Vitaly Lashmanov Vitaly Lashmanov
Vlad Dragos Vlad Dragos
William Lee
Wouter van Ackooy Wouter van Ackooy
Xuan Luong
Xuecong Liao Xuecong Liao
Zoltán Máté
Roland Puntaier
Allan Feldman

File diff suppressed because it is too large.


@@ -34,13 +34,13 @@ If you are reporting a bug, please include:
 * Your operating system name and version.
 * Any details about your local setup that might be helpful in troubleshooting,
-  specifically Python interpreter version,
-  installed libraries and pytest version.
+  specifically the Python interpreter version, installed libraries, and pytest
+  version.
 * Detailed steps to reproduce the bug.
 
-If you can write a demonstration test that currently fails but should pass (xfail),
-that is a very useful commit to make as well, even if you can't find how
-to fix the bug yet.
+If you can write a demonstration test that currently fails but should pass
+(xfail), that is a very useful commit to make as well, even if you cannot
+fix the bug itself.
 
 .. _fixbugs:
@@ -49,7 +49,7 @@ Fix bugs
 --------
 
 Look through the GitHub issues for bugs. Here is a filter you can use:
-https://github.com/pytest-dev/pytest/labels/bug
+https://github.com/pytest-dev/pytest/labels/type%3A%20bug
 
 :ref:`Talk <contact>` to developers to find out how you can fix specific bugs.
@@ -120,7 +120,7 @@ the following:
 - PyPI presence with a ``setup.py`` that contains a license, ``pytest-``
   prefixed name, version number, authors, short and long description.
 
-- a ``tox.ini`` for running tests using `tox <http://tox.testrun.org>`_.
+- a ``tox.ini`` for running tests using `tox <https://tox.readthedocs.io>`_.
 
 - a ``README.txt`` describing how to use the plugin and on which
   platforms it runs.
@@ -158,19 +158,41 @@ As stated, the objective is to share maintenance and avoid "plugin-abandon".
 .. _`pull requests`:
 .. _pull-requests:
 
-Preparing Pull Requests on GitHub
----------------------------------
+Preparing Pull Requests
+-----------------------
 
-.. note::
-  What is a "pull request"? It informs project's core developers about the
-  changes you want to review and merge. Pull requests are stored on
-  `GitHub servers <https://github.com/pytest-dev/pytest/pulls>`_.
-  Once you send a pull request, we can discuss its potential modifications and
-  even add more commits to it later on.
-
-There's an excellent tutorial on how Pull Requests work in the
-`GitHub Help Center <https://help.github.com/articles/using-pull-requests/>`_,
-but here is a simple overview:
+Short version
+~~~~~~~~~~~~~
+
+#. Fork the repository;
+#. Target ``master`` for bugfixes and doc changes;
+#. Target ``features`` for new features or functionality changes.
+#. Follow **PEP-8**. There's a ``tox`` command to help fixing it: ``tox -e fix-lint``.
+#. Tests are run using ``tox``::
+
+    tox -e linting,py27,py36
+
+   The test environments above are usually enough to cover most cases locally.
+#. Write a ``changelog`` entry: ``changelog/2574.bugfix``, use issue id number
+   and one of ``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or
+   ``trivial`` for the issue type.
+#. Unless your change is a trivial or a documentation fix (e.g., a typo or reword of a small section) please
+   add yourself to the ``AUTHORS`` file, in alphabetical order;
+
+Long version
+~~~~~~~~~~~~
+
+What is a "pull request"? It informs the project's core developers about the
+changes you want to review and merge. Pull requests are stored on
+`GitHub servers <https://github.com/pytest-dev/pytest/pulls>`_.
+Once you send a pull request, we can discuss its potential modifications and
+even add more commits to it later on. There's an excellent tutorial on how Pull
+Requests work in the
+`GitHub Help Center <https://help.github.com/articles/using-pull-requests/>`_.
+
+Here is a simple overview, with pytest-specific bits:
 
 #. Fork the
    `pytest GitHub repository <https://github.com/pytest-dev/pytest>`__. It's
@@ -214,12 +236,18 @@ but here is a simple overview:
    This command will run tests via the "tox" tool against Python 2.7 and 3.6
    and also perform "lint" coding-style checks.
 
-#. You can now edit your local working copy.
+#. You can now edit your local working copy. Please follow PEP-8.
 
    You can now make the changes you want and run the tests again as necessary.
 
-   To run tests on Python 2.7 and pass options to pytest (e.g. enter pdb on
-   failure) to pytest you can do::
+   If you have too much linting errors, try running::
+
+    $ tox -e fix-lint
+
+   To fix pep8 related errors.
+
+   You can pass different options to ``tox``. For example, to run tests on Python 2.7 and pass options to pytest
+   (e.g. enter pdb on failure) to pytest you can do::
 
     $ tox -e py27 -- --pdb
@@ -232,9 +260,11 @@ but here is a simple overview:
    $ git commit -a -m "<commit message>"
    $ git push -u
 
-   Make sure you add a message to ``CHANGELOG.rst`` and add yourself to
-   ``AUTHORS``. If you are unsure about either of these steps, submit your
-   pull request and we'll help you fix it up.
+#. Create a new changelog entry in ``changelog``. The file should be named ``<issueid>.<type>``,
+   where *issueid* is the number of the issue related to the change and *type* is one of
+   ``bugfix``, ``removal``, ``feature``, ``vendor``, ``doc`` or ``trivial``.
+
+#. Add yourself to ``AUTHORS`` file if not there yet, in alphabetical order.
 
 #. Finally, submit a pull request through the GitHub website using this data::
@@ -246,3 +276,15 @@ but here is a simple overview:
    base: features  # if it's a feature
 
+
+Joining the Development Team
+----------------------------
+
+Anyone who has successfully seen through a pull request which did not
+require any extra work from the development team to merge will
+themselves gain commit access if they so wish (if we forget to ask please send a friendly
+reminder). This does not mean your workflow to contribute changes,
+everyone goes through the same pull-request-and-review process and
+no-one merges their own pull requests unless already approved. It does however mean you can
+participate in the development process more fully since you can merge
+pull requests from other contributors yourself after having reviewed
+them.


@@ -1,5 +1,9 @@
-How to release pytest
---------------------------------------------
+Release Procedure
+-----------------
+
+Our current policy for releasing is to aim for a bugfix every few weeks and a minor release every 2-3 months. The idea
+is to get fixes and new features out instead of trying to cram a ton of features into a release and by consequence
+taking a lot of time to make a new one.
 
 .. important::
@@ -8,7 +12,7 @@ How to release pytest
 
 #. Install development dependencies in a virtual environment with::
 
-    pip3 install -r tasks/requirements.txt
+    pip3 install -U -r tasks/requirements.txt
 
 #. Create a branch ``release-X.Y.Z`` with the version for the release.
@@ -18,44 +22,28 @@ How to release pytest
 
    Ensure your are in a clean work tree.
 
-#. Generate docs, changelog, announcements and upload a package to
-   your ``devpi`` staging server::
+#. Generate docs, changelog, announcements and a **local** tag::
 
-    invoke generate.pre_release <VERSION> <DEVPI USER> --password <DEVPI PASSWORD>
+    invoke generate.pre-release <VERSION>
 
-   If ``--password`` is not given, it is assumed the user is already logged in ``devpi``.
-   If you don't have an account, please ask for one.
-
 #. Open a PR for this branch targeting ``master``.
 
-#. Test the package
-
-   * **Manual method**
-
-     Run from multiple machines::
-
-       devpi use https://devpi.net/USER/dev
-       devpi test pytest==VERSION
-
-     Check that tests pass for relevant combinations with::
-
-       devpi list pytest
-
-   * **CI servers**
-
-     Configure a repository as per-instructions on
-     devpi-cloud-test_ to test the package on Travis_ and AppVeyor_.
-     All test environments should pass.
-
-#. Publish to PyPI::
-
-    invoke generate.publish_release <VERSION> <DEVPI USER> <PYPI_NAME>
-
-   where PYPI_NAME is the name of pypi.python.org as configured in your ``~/.pypirc``
-   file `for devpi <http://doc.devpi.net/latest/quickstart-releaseprocess.html?highlight=pypirc#devpi-push-releasing-to-an-external-index>`_.
-
-#. After a minor/major release, merge ``features`` into ``master`` and push (or open a PR).
-
-.. _devpi-cloud-test: https://github.com/obestwalter/devpi-cloud-test
-.. _AppVeyor: https://www.appveyor.com/
-.. _Travis: https://travis-ci.org
+#. After all tests pass and the PR has been approved, publish to PyPI by pushing the tag::
+
+    git push git@github.com:pytest-dev/pytest.git <VERSION>
+
+   Wait for the deploy to complete, then make sure it is `available on PyPI <https://pypi.org/project/pytest>`_.
+
+#. Send an email announcement with the contents from::
+
+    doc/en/announce/release-<VERSION>.rst
+
+   To the following mailing lists:
+
+   * pytest-dev@python.org (all releases)
+   * python-announce-list@python.org (all releases)
+   * testing-in-python@lists.idyll.org (only major/minor releases)
+
+   And announce it on `Twitter <https://twitter.com/>`_ with the ``#pytest`` hashtag.
+
+#. After a minor/major release, merge ``release-X.Y.Z`` into ``master`` and push (or open a PR).


@@ -23,6 +23,9 @@
 .. image:: https://ci.appveyor.com/api/projects/status/mrgbjaua7t33pg6b?svg=true
     :target: https://ci.appveyor.com/project/pytestbot/pytest
 
+.. image:: https://www.codetriage.com/pytest-dev/pytest/badges/users.svg
+    :target: https://www.codetriage.com/pytest-dev/pytest
+
 The ``pytest`` framework makes it easy to write small tests, yet
 scales to support complex functional testing for applications and libraries.
@@ -76,9 +79,9 @@ Features
 
 - Can run `unittest <http://docs.pytest.org/en/latest/unittest.html>`_ (or trial),
   `nose <http://docs.pytest.org/en/latest/nose.html>`_ test suites out of the box;
 
-- Python2.6+, Python3.3+, PyPy-2.3, Jython-2.5 (untested);
+- Python 2.7, Python 3.4+, PyPy 2.3, Jython 2.5 (untested);
 
-- Rich plugin architecture, with over 150+ `external plugins <http://docs.pytest.org/en/latest/plugins.html#installing-external-plugins-searching>`_ and thriving community;
+- Rich plugin architecture, with over 315+ `external plugins <http://plugincompat.herokuapp.com>`_ and thriving community;
 
 Documentation


@ -4,9 +4,6 @@ needs argcomplete>=0.5.6 for python 3.2/3.3 (older versions fail
to find the magic string, so _ARGCOMPLETE env. var is never set, and to find the magic string, so _ARGCOMPLETE env. var is never set, and
this does not need special code. this does not need special code.
argcomplete does not support python 2.5 (although the changes for that
are minor).
Function try_argcomplete(parser) should be called directly before Function try_argcomplete(parser) should be called directly before
the call to ArgumentParser.parse_args(). the call to ArgumentParser.parse_args().
@ -62,21 +59,24 @@ import sys
import os import os
from glob import glob from glob import glob
class FastFilesCompleter:
class FastFilesCompleter(object):
'Fast file completer class' 'Fast file completer class'
def __init__(self, directories=True): def __init__(self, directories=True):
self.directories = directories self.directories = directories
def __call__(self, prefix, **kwargs): def __call__(self, prefix, **kwargs):
"""only called on non option completions""" """only called on non option completions"""
if os.path.sep in prefix[1:]: # if os.path.sep in prefix[1:]:
prefix_dir = len(os.path.dirname(prefix) + os.path.sep) prefix_dir = len(os.path.dirname(prefix) + os.path.sep)
else: else:
prefix_dir = 0 prefix_dir = 0
completion = [] completion = []
globbed = [] globbed = []
if '*' not in prefix and '?' not in prefix: if '*' not in prefix and '?' not in prefix:
if prefix[-1] == os.path.sep: # we are on unix, otherwise no bash # we are on unix, otherwise no bash
if not prefix or prefix[-1] == os.path.sep:
globbed.extend(glob(prefix + '.*')) globbed.extend(glob(prefix + '.*'))
prefix += '*' prefix += '*'
globbed.extend(glob(prefix)) globbed.extend(glob(prefix))
@ -96,7 +96,8 @@ if os.environ.get('_ARGCOMPLETE'):
filescompleter = FastFilesCompleter() filescompleter = FastFilesCompleter()
def try_argcomplete(parser): def try_argcomplete(parser):
argcomplete.autocomplete(parser) argcomplete.autocomplete(parser, always_complete_options=False)
else: else:
def try_argcomplete(parser): pass def try_argcomplete(parser):
pass
filescompleter = None filescompleter = None
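
A standalone sketch (my own illustration, not part of the diff; complete_paths is a hypothetical name) of the globbing rule the updated completer uses: when the prefix contains no wildcard, an empty prefix or one ending in a path separator also collects dot-files, then everything matching prefix + '*' is returned.

    import os
    from glob import glob

    def complete_paths(prefix):
        # mirrors FastFilesCompleter.__call__ above: no wildcard in the prefix
        # means we glob for it ourselves, including dot-files for empty or
        # directory prefixes
        globbed = []
        if '*' not in prefix and '?' not in prefix:
            if not prefix or prefix[-1] == os.path.sep:
                globbed.extend(glob(prefix + '.*'))
            prefix += '*'
        globbed.extend(glob(prefix))
        return sorted(globbed)

    print(complete_paths(''))     # current directory, dot-files included
    print(complete_paths('set'))  # e.g. setup.py, setup.cfg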


@ -5,6 +5,7 @@
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import types import types
def format_exception_only(etype, value): def format_exception_only(etype, value):
"""Format the exception part of a traceback. """Format the exception part of a traceback.
@ -62,6 +63,7 @@ def format_exception_only(etype, value):
lines.append(_format_final_exc_line(stype, value)) lines.append(_format_final_exc_line(stype, value))
return lines return lines
def _format_final_exc_line(etype, value): def _format_final_exc_line(etype, value):
"""Return a list of a single line -- normal case for format_exception_only""" """Return a list of a single line -- normal case for format_exception_only"""
valuestr = _some_str(value) valuestr = _some_str(value)
@ -71,6 +73,7 @@ def _format_final_exc_line(etype, value):
line = "%s: %s\n" % (etype, valuestr) line = "%s: %s\n" % (etype, valuestr)
return line return line
def _some_str(value): def _some_str(value):
try: try:
return unicode(value) return unicode(value)


@ -1,6 +1,10 @@
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import inspect
import sys import sys
import traceback
from inspect import CO_VARARGS, CO_VARKEYWORDS from inspect import CO_VARARGS, CO_VARKEYWORDS
import attr
import re import re
from weakref import ref from weakref import ref
from _pytest.compat import _PY2, _PY3, PY35, safe_str from _pytest.compat import _PY2, _PY3, PY35, safe_str
@ -8,8 +12,6 @@ from _pytest.compat import _PY2, _PY3, PY35, safe_str
import py import py
builtin_repr = repr builtin_repr = repr
reprlib = py.builtin._tryimport('repr', 'reprlib')
if _PY3: if _PY3:
from traceback import format_exception_only from traceback import format_exception_only
else: else:
@ -18,6 +20,7 @@ else:
class Code(object): class Code(object):
""" wrapper around Python code objects """ """ wrapper around Python code objects """
def __init__(self, rawcode): def __init__(self, rawcode):
if not hasattr(rawcode, "co_filename"): if not hasattr(rawcode, "co_filename"):
rawcode = getrawcode(rawcode) rawcode = getrawcode(rawcode)
@ -26,7 +29,7 @@ class Code(object):
self.firstlineno = rawcode.co_firstlineno - 1 self.firstlineno = rawcode.co_firstlineno - 1
self.name = rawcode.co_name self.name = rawcode.co_name
except AttributeError: except AttributeError:
raise TypeError("not a code object: %r" %(rawcode,)) raise TypeError("not a code object: %r" % (rawcode,))
self.raw = rawcode self.raw = rawcode
def __eq__(self, other): def __eq__(self, other):
@ -82,6 +85,7 @@ class Code(object):
argcount += raw.co_flags & CO_VARKEYWORDS argcount += raw.co_flags & CO_VARKEYWORDS
return raw.co_varnames[:argcount] return raw.co_varnames[:argcount]
class Frame(object): class Frame(object):
"""Wrapper around a Python frame holding f_locals and f_globals """Wrapper around a Python frame holding f_locals and f_globals
in which expressions can be evaluated.""" in which expressions can be evaluated."""
@ -119,7 +123,7 @@ class Frame(object):
""" """
f_locals = self.f_locals.copy() f_locals = self.f_locals.copy()
f_locals.update(vars) f_locals.update(vars)
py.builtin.exec_(code, self.f_globals, f_locals ) py.builtin.exec_(code, self.f_globals, f_locals)
def repr(self, object): def repr(self, object):
""" return a 'safe' (non-recursive, one-line) string repr for 'object' """ return a 'safe' (non-recursive, one-line) string repr for 'object'
@ -143,6 +147,7 @@ class Frame(object):
pass # this can occur when using Psyco pass # this can occur when using Psyco
return retval return retval
class TracebackEntry(object): class TracebackEntry(object):
""" a single entry in a traceback """ """ a single entry in a traceback """
@ -168,7 +173,7 @@ class TracebackEntry(object):
return self.lineno - self.frame.code.firstlineno return self.lineno - self.frame.code.firstlineno
def __repr__(self): def __repr__(self):
return "<TracebackEntry %s:%d>" %(self.frame.code.path, self.lineno+1) return "<TracebackEntry %s:%d>" % (self.frame.code.path, self.lineno + 1)
@property @property
def statement(self): def statement(self):
@ -232,7 +237,7 @@ class TracebackEntry(object):
except KeyError: except KeyError:
return False return False
if py.builtin.callable(tbh): if callable(tbh):
return tbh(None if self._excinfo is None else self._excinfo()) return tbh(None if self._excinfo is None else self._excinfo())
else: else:
return tbh return tbh
@ -247,19 +252,21 @@ class TracebackEntry(object):
line = str(self.statement).lstrip() line = str(self.statement).lstrip()
except KeyboardInterrupt: except KeyboardInterrupt:
raise raise
except: except: # noqa
line = "???" line = "???"
return " File %r:%d in %s\n %s\n" %(fn, self.lineno+1, name, line) return " File %r:%d in %s\n %s\n" % (fn, self.lineno + 1, name, line)
def name(self): def name(self):
return self.frame.code.raw.co_name return self.frame.code.raw.co_name
name = property(name, None, None, "co_name of underlaying code") name = property(name, None, None, "co_name of underlaying code")
class Traceback(list): class Traceback(list):
""" Traceback objects encapsulate and offer higher level """ Traceback objects encapsulate and offer higher level
access to Traceback entries. access to Traceback entries.
""" """
Entry = TracebackEntry Entry = TracebackEntry
def __init__(self, tb, excinfo=None): def __init__(self, tb, excinfo=None):
""" initialize from given python traceback object and ExceptionInfo """ """ initialize from given python traceback object and ExceptionInfo """
self._excinfo = excinfo self._excinfo = excinfo
@ -315,7 +322,7 @@ class Traceback(list):
""" return last non-hidden traceback entry that lead """ return last non-hidden traceback entry that lead
to the exception of a traceback. to the exception of a traceback.
""" """
for i in range(-1, -len(self)-1, -1): for i in range(-1, -len(self) - 1, -1):
entry = self[i] entry = self[i]
if not entry.ishidden(): if not entry.ishidden():
return entry return entry
@ -330,25 +337,26 @@ class Traceback(list):
# id for the code.raw is needed to work around # id for the code.raw is needed to work around
# the strange metaprogramming in the decorator lib from pypi # the strange metaprogramming in the decorator lib from pypi
# which generates code objects that have hash/value equality # which generates code objects that have hash/value equality
#XXX needs a test # XXX needs a test
key = entry.frame.code.path, id(entry.frame.code.raw), entry.lineno key = entry.frame.code.path, id(entry.frame.code.raw), entry.lineno
#print "checking for recursion at", key # print "checking for recursion at", key
l = cache.setdefault(key, []) values = cache.setdefault(key, [])
if l: if values:
f = entry.frame f = entry.frame
loc = f.f_locals loc = f.f_locals
for otherloc in l: for otherloc in values:
if f.is_true(f.eval(co_equal, if f.is_true(f.eval(co_equal,
__recursioncache_locals_1=loc, __recursioncache_locals_1=loc,
__recursioncache_locals_2=otherloc)): __recursioncache_locals_2=otherloc)):
return i return i
l.append(entry.frame.f_locals) values.append(entry.frame.f_locals)
return None return None
co_equal = compile('__recursioncache_locals_1 == __recursioncache_locals_2', co_equal = compile('__recursioncache_locals_1 == __recursioncache_locals_2',
'?', 'eval') '?', 'eval')
class ExceptionInfo(object): class ExceptionInfo(object):
""" wraps sys.exc_info() objects and offers """ wraps sys.exc_info() objects and offers
help for navigating the traceback. help for navigating the traceback.
@ -405,7 +413,7 @@ class ExceptionInfo(object):
exconly = self.exconly(tryshort=True) exconly = self.exconly(tryshort=True)
entry = self.traceback.getcrashentry() entry = self.traceback.getcrashentry()
path, lineno = entry.frame.code.raw.co_filename, entry.lineno path, lineno = entry.frame.code.raw.co_filename, entry.lineno
return ReprFileLocation(path, lineno+1, exconly) return ReprFileLocation(path, lineno + 1, exconly)
def getrepr(self, showlocals=False, style="long", def getrepr(self, showlocals=False, style="long",
abspath=False, tbfilter=True, funcargs=False): abspath=False, tbfilter=True, funcargs=False):
@ -418,7 +426,7 @@ class ExceptionInfo(object):
""" """
if style == 'native': if style == 'native':
return ReprExceptionInfo(ReprTracebackNative( return ReprExceptionInfo(ReprTracebackNative(
py.std.traceback.format_exception( traceback.format_exception(
self.type, self.type,
self.value, self.value,
self.traceback[0]._rawentry, self.traceback[0]._rawentry,
@ -452,32 +460,32 @@ class ExceptionInfo(object):
return True return True
@attr.s
class FormattedExcinfo(object): class FormattedExcinfo(object):
""" presenting information about failing Functions and Generators. """ """ presenting information about failing Functions and Generators. """
# for traceback entries # for traceback entries
flow_marker = ">" flow_marker = ">"
fail_marker = "E" fail_marker = "E"
def __init__(self, showlocals=False, style="long", abspath=True, tbfilter=True, funcargs=False): showlocals = attr.ib(default=False)
self.showlocals = showlocals style = attr.ib(default="long")
self.style = style abspath = attr.ib(default=True)
self.tbfilter = tbfilter tbfilter = attr.ib(default=True)
self.funcargs = funcargs funcargs = attr.ib(default=False)
self.abspath = abspath astcache = attr.ib(default=attr.Factory(dict), init=False, repr=False)
self.astcache = {}
def _getindent(self, source): def _getindent(self, source):
# figure out indent for given source # figure out indent for given source
try: try:
s = str(source.getstatement(len(source)-1)) s = str(source.getstatement(len(source) - 1))
except KeyboardInterrupt: except KeyboardInterrupt:
raise raise
except: except: # noqa
try: try:
s = str(source[-1]) s = str(source[-1])
except KeyboardInterrupt: except KeyboardInterrupt:
raise raise
except: except: # noqa
return 0 return 0
return 4 + (len(s) - len(s.lstrip())) return 4 + (len(s) - len(s.lstrip()))
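
For readers unfamiliar with attrs, here is a minimal sketch (my own example, not pytest code; FormattingOptions is a made-up name) of the pattern the FormattedExcinfo change applies: attr.ib defaults replace the hand-written __init__, and init=False/repr=False keep the cache out of the constructor and the repr.

    import attr

    @attr.s
    class FormattingOptions(object):
        showlocals = attr.ib(default=False)
        style = attr.ib(default="long")
        # per-instance cache, not settable from the constructor, hidden from repr
        astcache = attr.ib(default=attr.Factory(dict), init=False, repr=False)

    opts = FormattingOptions(style="short")
    print(opts)           # FormattingOptions(showlocals=False, style='short')
    print(opts.astcache)  # {} -- a fresh dict per instance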
@ -513,7 +521,7 @@ class FormattedExcinfo(object):
for line in source.lines[:line_index]: for line in source.lines[:line_index]:
lines.append(space_prefix + line) lines.append(space_prefix + line)
lines.append(self.flow_marker + " " + source.lines[line_index]) lines.append(self.flow_marker + " " + source.lines[line_index])
for line in source.lines[line_index+1:]: for line in source.lines[line_index + 1:]:
lines.append(space_prefix + line) lines.append(space_prefix + line)
if excinfo is not None: if excinfo is not None:
indent = 4 if short else self._getindent(source) indent = 4 if short else self._getindent(source)
@ -546,13 +554,13 @@ class FormattedExcinfo(object):
# _repr() function, which is only reprlib.Repr in # _repr() function, which is only reprlib.Repr in
# disguise, so is very configurable. # disguise, so is very configurable.
str_repr = self._saferepr(value) str_repr = self._saferepr(value)
#if len(str_repr) < 70 or not isinstance(value, # if len(str_repr) < 70 or not isinstance(value,
# (list, tuple, dict)): # (list, tuple, dict)):
lines.append("%-10s = %s" %(name, str_repr)) lines.append("%-10s = %s" % (name, str_repr))
#else: # else:
# self._line("%-10s =\\" % (name,)) # self._line("%-10s =\\" % (name,))
# # XXX # # XXX
# py.std.pprint.pprint(value, stream=self.excinfowriter) # pprint.pprint(value, stream=self.excinfowriter)
return ReprLocals(lines) return ReprLocals(lines)
def repr_traceback_entry(self, entry, excinfo=None): def repr_traceback_entry(self, entry, excinfo=None):
@ -575,11 +583,11 @@ class FormattedExcinfo(object):
s = self.get_source(source, line_index, excinfo, short=short) s = self.get_source(source, line_index, excinfo, short=short)
lines.extend(s) lines.extend(s)
if short: if short:
message = "in %s" %(entry.name) message = "in %s" % (entry.name)
else: else:
message = excinfo and excinfo.typename or "" message = excinfo and excinfo.typename or ""
path = self._makepath(entry.path) path = self._makepath(entry.path)
filelocrepr = ReprFileLocation(path, entry.lineno+1, message) filelocrepr = ReprFileLocation(path, entry.lineno + 1, message)
localsrepr = None localsrepr = None
if not short: if not short:
localsrepr = self.repr_locals(entry.locals) localsrepr = self.repr_locals(entry.locals)
@ -665,7 +673,7 @@ class FormattedExcinfo(object):
else: else:
# fallback to native repr if the exception doesn't have a traceback: # fallback to native repr if the exception doesn't have a traceback:
# ExceptionInfo objects require a full traceback to work # ExceptionInfo objects require a full traceback to work
reprtraceback = ReprTracebackNative(py.std.traceback.format_exception(type(e), e, None)) reprtraceback = ReprTracebackNative(traceback.format_exception(type(e), e, None))
reprcrash = None reprcrash = None
repr_chain += [(reprtraceback, reprcrash, descr)] repr_chain += [(reprtraceback, reprcrash, descr)]
@ -673,7 +681,7 @@ class FormattedExcinfo(object):
e = e.__cause__ e = e.__cause__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'The above exception was the direct cause of the following exception:' descr = 'The above exception was the direct cause of the following exception:'
elif e.__context__ is not None: elif (e.__context__ is not None and not e.__suppress_context__):
e = e.__context__ e = e.__context__
excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None excinfo = ExceptionInfo((type(e), e, e.__traceback__)) if e.__traceback__ else None
descr = 'During handling of the above exception, another exception occurred:' descr = 'During handling of the above exception, another exception occurred:'
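
The new __suppress_context__ check follows standard Python 3 exception chaining: raise ... from None still records __context__ but marks it suppressed, so the formatter should not print the implicit chain. A plain-Python illustration (not pytest code):

    try:
        try:
            1 / 0
        except ZeroDivisionError:
            raise ValueError("implicit chain")  # __context__ is set automatically
    except ValueError as exc:
        assert isinstance(exc.__context__, ZeroDivisionError)
        assert not exc.__suppress_context__

    try:
        try:
            1 / 0
        except ZeroDivisionError:
            raise ValueError("explicit break") from None  # suppress the chain
    except ValueError as exc:
        assert exc.__context__ is not None  # still recorded ...
        assert exc.__suppress_context__     # ... but flagged as suppressed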
@ -699,7 +707,7 @@ class TerminalRepr(object):
return io.getvalue().strip() return io.getvalue().strip()
def __repr__(self): def __repr__(self):
return "<%s instance at %0x>" %(self.__class__, id(self)) return "<%s instance at %0x>" % (self.__class__, id(self))
class ExceptionRepr(TerminalRepr): class ExceptionRepr(TerminalRepr):
@ -743,6 +751,7 @@ class ReprExceptionInfo(ExceptionRepr):
self.reprtraceback.toterminal(tw) self.reprtraceback.toterminal(tw)
super(ReprExceptionInfo, self).toterminal(tw) super(ReprExceptionInfo, self).toterminal(tw)
class ReprTraceback(TerminalRepr): class ReprTraceback(TerminalRepr):
entrysep = "_ " entrysep = "_ "
@ -758,7 +767,7 @@ class ReprTraceback(TerminalRepr):
tw.line("") tw.line("")
entry.toterminal(tw) entry.toterminal(tw)
if i < len(self.reprentries) - 1: if i < len(self.reprentries) - 1:
next_entry = self.reprentries[i+1] next_entry = self.reprentries[i + 1]
if entry.style == "long" or \ if entry.style == "long" or \
entry.style == "short" and next_entry.style == "long": entry.style == "short" and next_entry.style == "long":
tw.sep(self.entrysep) tw.sep(self.entrysep)
@ -766,12 +775,14 @@ class ReprTraceback(TerminalRepr):
if self.extraline: if self.extraline:
tw.line(self.extraline) tw.line(self.extraline)
class ReprTracebackNative(ReprTraceback): class ReprTracebackNative(ReprTraceback):
def __init__(self, tblines): def __init__(self, tblines):
self.style = "native" self.style = "native"
self.reprentries = [ReprEntryNative(tblines)] self.reprentries = [ReprEntryNative(tblines)]
self.extraline = None self.extraline = None
class ReprEntryNative(TerminalRepr): class ReprEntryNative(TerminalRepr):
style = "native" style = "native"
@ -781,6 +792,7 @@ class ReprEntryNative(TerminalRepr):
def toterminal(self, tw): def toterminal(self, tw):
tw.write("".join(self.lines)) tw.write("".join(self.lines))
class ReprEntry(TerminalRepr): class ReprEntry(TerminalRepr):
localssep = "_ " localssep = "_ "
@ -797,7 +809,7 @@ class ReprEntry(TerminalRepr):
for line in self.lines: for line in self.lines:
red = line.startswith("E ") red = line.startswith("E ")
tw.line(line, bold=True, red=red) tw.line(line, bold=True, red=red)
#tw.line("") # tw.line("")
return return
if self.reprfuncargs: if self.reprfuncargs:
self.reprfuncargs.toterminal(tw) self.reprfuncargs.toterminal(tw)
@ -805,7 +817,7 @@ class ReprEntry(TerminalRepr):
red = line.startswith("E ") red = line.startswith("E ")
tw.line(line, bold=True, red=red) tw.line(line, bold=True, red=red)
if self.reprlocals: if self.reprlocals:
#tw.sep(self.localssep, "Locals") # tw.sep(self.localssep, "Locals")
tw.line("") tw.line("")
self.reprlocals.toterminal(tw) self.reprlocals.toterminal(tw)
if self.reprfileloc: if self.reprfileloc:
@ -818,6 +830,7 @@ class ReprEntry(TerminalRepr):
self.reprlocals, self.reprlocals,
self.reprfileloc) self.reprfileloc)
class ReprFileLocation(TerminalRepr): class ReprFileLocation(TerminalRepr):
def __init__(self, path, lineno, message): def __init__(self, path, lineno, message):
self.path = str(path) self.path = str(path)
@ -834,6 +847,7 @@ class ReprFileLocation(TerminalRepr):
tw.write(self.path, bold=True, red=True) tw.write(self.path, bold=True, red=True)
tw.line(":%s: %s" % (self.lineno, msg)) tw.line(":%s: %s" % (self.lineno, msg))
class ReprLocals(TerminalRepr): class ReprLocals(TerminalRepr):
def __init__(self, lines): def __init__(self, lines):
self.lines = lines self.lines = lines
@ -842,6 +856,7 @@ class ReprLocals(TerminalRepr):
for line in self.lines: for line in self.lines:
tw.line(line) tw.line(line)
class ReprFuncArgs(TerminalRepr): class ReprFuncArgs(TerminalRepr):
def __init__(self, args): def __init__(self, args):
self.args = args self.args = args
@ -850,7 +865,7 @@ class ReprFuncArgs(TerminalRepr):
if self.args: if self.args:
linesofar = "" linesofar = ""
for name, value in self.args: for name, value in self.args:
ns = "%s = %s" %(name, value) ns = "%s = %s" % (safe_str(name), safe_str(value))
if len(ns) + len(linesofar) + 2 > tw.fullwidth: if len(ns) + len(linesofar) + 2 > tw.fullwidth:
if linesofar: if linesofar:
tw.line(linesofar) tw.line(linesofar)
@ -875,7 +890,7 @@ def getrawcode(obj, trycall=True):
obj = getattr(obj, 'f_code', obj) obj = getattr(obj, 'f_code', obj)
obj = getattr(obj, '__code__', obj) obj = getattr(obj, '__code__', obj)
if trycall and not hasattr(obj, 'co_firstlineno'): if trycall and not hasattr(obj, 'co_firstlineno'):
if hasattr(obj, '__call__') and not py.std.inspect.isclass(obj): if hasattr(obj, '__call__') and not inspect.isclass(obj):
x = getrawcode(obj.__call__, trycall=False) x = getrawcode(obj.__call__, trycall=False)
if hasattr(x, 'co_firstlineno'): if hasattr(x, 'co_firstlineno'):
return x return x


@ -1,17 +1,16 @@
from __future__ import absolute_import, division, generators, print_function from __future__ import absolute_import, division, generators, print_function
import ast
from ast import PyCF_ONLY_AST as _AST_FLAG
from bisect import bisect_right from bisect import bisect_right
import linecache
import sys import sys
import inspect, tokenize import six
import inspect
import tokenize
import py import py
cpy_compile = compile
try: cpy_compile = compile
import _ast
from _ast import PyCF_ONLY_AST as _AST_FLAG
except ImportError:
_AST_FLAG = 0
_ast = None
class Source(object): class Source(object):
@ -19,6 +18,7 @@ class Source(object):
possibly deindenting it. possibly deindenting it.
""" """
_compilecounter = 0 _compilecounter = 0
def __init__(self, *parts, **kwargs): def __init__(self, *parts, **kwargs):
self.lines = lines = [] self.lines = lines = []
de = kwargs.get('deindent', True) de = kwargs.get('deindent', True)
@ -26,11 +26,11 @@ class Source(object):
for part in parts: for part in parts:
if not part: if not part:
partlines = [] partlines = []
if isinstance(part, Source): elif isinstance(part, Source):
partlines = part.lines partlines = part.lines
elif isinstance(part, (tuple, list)): elif isinstance(part, (tuple, list)):
partlines = [x.rstrip("\n") for x in part] partlines = [x.rstrip("\n") for x in part]
elif isinstance(part, py.builtin._basestring): elif isinstance(part, six.string_types):
partlines = part.split('\n') partlines = part.split('\n')
if rstrip: if rstrip:
while partlines: while partlines:
@ -73,7 +73,7 @@ class Source(object):
start, end = 0, len(self) start, end = 0, len(self)
while start < end and not self.lines[start].strip(): while start < end and not self.lines[start].strip():
start += 1 start += 1
while end > start and not self.lines[end-1].strip(): while end > start and not self.lines[end - 1].strip():
end -= 1 end -= 1
source = Source() source = Source()
source.lines[:] = self.lines[start:end] source.lines[:] = self.lines[start:end]
@ -86,7 +86,7 @@ class Source(object):
before = Source(before) before = Source(before)
after = Source(after) after = Source(after)
newsource = Source() newsource = Source()
lines = [ (indent + line) for line in self.lines] lines = [(indent + line) for line in self.lines]
newsource.lines = before.lines + lines + after.lines newsource.lines = before.lines + lines + after.lines
return newsource return newsource
@ -95,17 +95,17 @@ class Source(object):
all lines indented by the given indent-string. all lines indented by the given indent-string.
""" """
newsource = Source() newsource = Source()
newsource.lines = [(indent+line) for line in self.lines] newsource.lines = [(indent + line) for line in self.lines]
return newsource return newsource
def getstatement(self, lineno, assertion=False): def getstatement(self, lineno):
""" return Source statement which contains the """ return Source statement which contains the
given linenumber (counted from 0). given linenumber (counted from 0).
""" """
start, end = self.getstatementrange(lineno, assertion) start, end = self.getstatementrange(lineno)
return self[start:end] return self[start:end]
def getstatementrange(self, lineno, assertion=False): def getstatementrange(self, lineno):
""" return (start, end) tuple which spans the minimal """ return (start, end) tuple which spans the minimal
statement region which containing the given lineno. statement region which containing the given lineno.
""" """
@ -131,20 +131,15 @@ class Source(object):
""" return True if source is parseable, heuristically """ return True if source is parseable, heuristically
deindenting it by default. deindenting it by default.
""" """
try: from parser import suite as syntax_checker
import parser
except ImportError:
syntax_checker = lambda x: compile(x, 'asd', 'exec')
else:
syntax_checker = parser.suite
if deindent: if deindent:
source = str(self.deindent()) source = str(self.deindent())
else: else:
source = str(self) source = str(self)
try: try:
#compile(source+'\n', "x", "exec") # compile(source+'\n', "x", "exec")
syntax_checker(source+'\n') syntax_checker(source + '\n')
except KeyboardInterrupt: except KeyboardInterrupt:
raise raise
except Exception: except Exception:
@ -165,7 +160,7 @@ class Source(object):
if not filename or py.path.local(filename).check(file=0): if not filename or py.path.local(filename).check(file=0):
if _genframe is None: if _genframe is None:
_genframe = sys._getframe(1) # the caller _genframe = sys._getframe(1) # the caller
fn,lineno = _genframe.f_code.co_filename, _genframe.f_lineno fn, lineno = _genframe.f_code.co_filename, _genframe.f_lineno
base = "<%d-codegen " % self._compilecounter base = "<%d-codegen " % self._compilecounter
self.__class__._compilecounter += 1 self.__class__._compilecounter += 1
if not filename: if not filename:
@ -180,7 +175,7 @@ class Source(object):
# re-represent syntax errors from parsing python strings # re-represent syntax errors from parsing python strings
msglines = self.lines[:ex.lineno] msglines = self.lines[:ex.lineno]
if ex.offset: if ex.offset:
msglines.append(" "*ex.offset + '^') msglines.append(" " * ex.offset + '^')
msglines.append("(code was compiled probably from here: %s)" % filename) msglines.append("(code was compiled probably from here: %s)" % filename)
newex = SyntaxError('\n'.join(msglines)) newex = SyntaxError('\n'.join(msglines))
newex.offset = ex.offset newex.offset = ex.offset
@ -191,21 +186,21 @@ class Source(object):
if flag & _AST_FLAG: if flag & _AST_FLAG:
return co return co
lines = [(x + "\n") for x in self.lines] lines = [(x + "\n") for x in self.lines]
py.std.linecache.cache[filename] = (1, None, lines, filename) linecache.cache[filename] = (1, None, lines, filename)
return co return co
# #
# public API shortcut functions # public API shortcut functions
# #
def compile_(source, filename=None, mode='exec', flags=
generators.compiler_flag, dont_inherit=0): def compile_(source, filename=None, mode='exec', flags=generators.compiler_flag, dont_inherit=0):
""" compile the given source to a raw code object, """ compile the given source to a raw code object,
and maintain an internal cache which allows later and maintain an internal cache which allows later
retrieval of the source code for the code object retrieval of the source code for the code object
and any recursively created code objects. and any recursively created code objects.
""" """
if _ast is not None and isinstance(source, _ast.AST): if isinstance(source, ast.AST):
# XXX should Source support having AST? # XXX should Source support having AST?
return cpy_compile(source, filename, mode, flags, dont_inherit) return cpy_compile(source, filename, mode, flags, dont_inherit)
_genframe = sys._getframe(1) # the caller _genframe = sys._getframe(1) # the caller
@ -218,13 +213,12 @@ def getfslineno(obj):
""" Return source location (path, lineno) for the given object. """ Return source location (path, lineno) for the given object.
If the source cannot be determined return ("", -1) If the source cannot be determined return ("", -1)
""" """
import _pytest._code from .code import Code
try: try:
code = _pytest._code.Code(obj) code = Code(obj)
except TypeError: except TypeError:
try: try:
fn = (py.std.inspect.getsourcefile(obj) or fn = inspect.getsourcefile(obj) or inspect.getfile(obj)
py.std.inspect.getfile(obj))
except TypeError: except TypeError:
return "", -1 return "", -1
@ -245,12 +239,13 @@ def getfslineno(obj):
# helper functions # helper functions
# #
def findsource(obj): def findsource(obj):
try: try:
sourcelines, lineno = py.std.inspect.findsource(obj) sourcelines, lineno = inspect.findsource(obj)
except py.builtin._sysex: except py.builtin._sysex:
raise raise
except: except: # noqa
return None, -1 return None, -1
source = Source() source = Source()
source.lines = [line.rstrip() for line in sourcelines] source.lines = [line.rstrip() for line in sourcelines]
@ -258,8 +253,8 @@ def findsource(obj):
def getsource(obj, **kwargs): def getsource(obj, **kwargs):
import _pytest._code from .code import getrawcode
obj = _pytest._code.getrawcode(obj) obj = getrawcode(obj)
try: try:
strsrc = inspect.getsource(obj) strsrc = inspect.getsource(obj)
except IndentationError: except IndentationError:
@ -274,7 +269,7 @@ def deindent(lines, offset=None):
line = line.expandtabs() line = line.expandtabs()
s = line.lstrip() s = line.lstrip()
if s: if s:
offset = len(line)-len(s) offset = len(line) - len(s)
break break
else: else:
offset = 0 offset = 0
@ -285,8 +280,6 @@ def deindent(lines, offset=None):
def readline_generator(lines): def readline_generator(lines):
for line in lines: for line in lines:
yield line + '\n' yield line + '\n'
while True:
yield ''
it = readline_generator(lines) it = readline_generator(lines)
@ -315,35 +308,30 @@ def get_statement_startend2(lineno, node):
import ast import ast
# flatten all statements and except handlers into one lineno-list # flatten all statements and except handlers into one lineno-list
# AST's line numbers start indexing at 1 # AST's line numbers start indexing at 1
l = [] values = []
for x in ast.walk(node): for x in ast.walk(node):
if isinstance(x, _ast.stmt) or isinstance(x, _ast.ExceptHandler): if isinstance(x, (ast.stmt, ast.ExceptHandler)):
l.append(x.lineno - 1) values.append(x.lineno - 1)
for name in "finalbody", "orelse": for name in ("finalbody", "orelse"):
val = getattr(x, name, None) val = getattr(x, name, None)
if val: if val:
# treat the finally/orelse part as its own statement # treat the finally/orelse part as its own statement
l.append(val[0].lineno - 1 - 1) values.append(val[0].lineno - 1 - 1)
l.sort() values.sort()
insert_index = bisect_right(l, lineno) insert_index = bisect_right(values, lineno)
start = l[insert_index - 1] start = values[insert_index - 1]
if insert_index >= len(l): if insert_index >= len(values):
end = None end = None
else: else:
end = l[insert_index] end = values[insert_index]
return start, end return start, end
def getstatementrange_ast(lineno, source, assertion=False, astnode=None): def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
if astnode is None: if astnode is None:
content = str(source) content = str(source)
if sys.version_info < (2,7):
content += "\n"
try:
astnode = compile(content, "source", "exec", 1024) # 1024 for AST astnode = compile(content, "source", "exec", 1024) # 1024 for AST
except ValueError:
start, end = getstatementrange_old(lineno, source, assertion)
return None, start, end
start, end = get_statement_startend2(lineno, astnode) start, end = get_statement_startend2(lineno, astnode)
# we need to correct the end: # we need to correct the end:
# - ast-parsing strips comments # - ast-parsing strips comments
@ -375,40 +363,3 @@ def getstatementrange_ast(lineno, source, assertion=False, astnode=None):
else: else:
break break
return astnode, start, end return astnode, start, end
def getstatementrange_old(lineno, source, assertion=False):
""" return (start, end) tuple which spans the minimal
statement region which containing the given lineno.
raise an IndexError if no such statementrange can be found.
"""
# XXX this logic is only used on python2.4 and below
# 1. find the start of the statement
from codeop import compile_command
for start in range(lineno, -1, -1):
if assertion:
line = source.lines[start]
# the following lines are not fully tested, change with care
if 'super' in line and 'self' in line and '__init__' in line:
raise IndexError("likely a subclass")
if "assert" not in line and "raise" not in line:
continue
trylines = source.lines[start:lineno+1]
# quick hack to prepare parsing an indented line with
# compile_command() (which errors on "return" outside defs)
trylines.insert(0, 'def xxx():')
trysource = '\n '.join(trylines)
# ^ space here
try:
compile_command(trysource)
except (SyntaxError, OverflowError, ValueError):
continue
# 2. find the end of the statement
for end in range(lineno+1, len(source)+1):
trysource = source[start:end]
if trysource.isparseable():
return start, end
raise SyntaxError("no valid source range around line %d " % (lineno,))
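
A self-contained sketch (my own example, not the pytest code; statement_span is a hypothetical helper) of the lookup get_statement_startend2 performs: collect the 0-based starting line of every statement, then bisect_right to find the statement region containing a given line.

    import ast
    from bisect import bisect_right

    source = (
        "x = 1\n"
        "if x:\n"
        "    y = (x +\n"
        "         2)\n"
        "z = 3\n"
    )

    tree = ast.parse(source)
    # 0-based starting lines of all statements and except handlers
    starts = sorted(node.lineno - 1 for node in ast.walk(tree)
                    if isinstance(node, (ast.stmt, ast.ExceptHandler)))

    def statement_span(lineno):
        index = bisect_right(starts, lineno)
        start = starts[index - 1]
        end = starts[index] if index < len(starts) else None
        return start, end

    print(statement_span(3))  # (2, 4): line 3 belongs to the statement starting on line 2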


@ -1,11 +0,0 @@
"""
imports symbols from vendored "pluggy" if available, otherwise
falls back to importing "pluggy" from the default namespace.
"""
from __future__ import absolute_import, division, print_function
try:
from _pytest.vendored_packages.pluggy import * # noqa
from _pytest.vendored_packages.pluggy import __version__ # noqa
except ImportError:
from pluggy import * # noqa
from pluggy import __version__ # noqa


@ -2,8 +2,8 @@
support for presenting detailed information in failing assertions. support for presenting detailed information in failing assertions.
""" """
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import py
import sys import sys
import six
from _pytest.assertion import util from _pytest.assertion import util
from _pytest.assertion import rewrite from _pytest.assertion import rewrite
@ -25,7 +25,6 @@ def pytest_addoption(parser):
expression information.""") expression information.""")
def register_assert_rewrite(*names): def register_assert_rewrite(*names):
"""Register one or more module names to be rewritten on import. """Register one or more module names to be rewritten on import.
@ -57,7 +56,7 @@ class DummyRewriteHook(object):
pass pass
class AssertionState: class AssertionState(object):
"""State for the assertion plugin.""" """State for the assertion plugin."""
def __init__(self, config, mode): def __init__(self, config, mode):
@ -68,10 +67,8 @@ class AssertionState:
def install_importhook(config): def install_importhook(config):
"""Try to install the rewrite hook, raise SystemError if it fails.""" """Try to install the rewrite hook, raise SystemError if it fails."""
# Both Jython and CPython 2.6.0 have AST bugs that make the # Jython has an AST bug that make the assertion rewriting hook malfunction.
# assertion rewriting hook malfunction. if (sys.platform.startswith('java')):
if (sys.platform.startswith('java') or
sys.version_info[:3] == (2, 6, 0)):
raise SystemError('rewrite not supported') raise SystemError('rewrite not supported')
config._assertstate = AssertionState(config, 'rewrite') config._assertstate = AssertionState(config, 'rewrite')
@ -127,7 +124,7 @@ def pytest_runtest_setup(item):
if new_expl: if new_expl:
new_expl = truncate.truncate_if_required(new_expl, item) new_expl = truncate.truncate_if_required(new_expl, item)
new_expl = [line.replace("\n", "\\n") for line in new_expl] new_expl = [line.replace("\n", "\\n") for line in new_expl]
res = py.builtin._totext("\n~").join(new_expl) res = six.text_type("\n~").join(new_expl)
if item.config.getvalue("assertmode") == "rewrite": if item.config.getvalue("assertmode") == "rewrite":
res = res.replace("%", "%%") res = res.replace("%", "%%")
return res return res


@ -1,18 +1,20 @@
"""Rewrite assertion AST to produce nice error messages""" """Rewrite assertion AST to produce nice error messages"""
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import ast import ast
import _ast
import errno import errno
import itertools import itertools
import imp import imp
import marshal import marshal
import os import os
import re import re
import six
import struct import struct
import sys import sys
import types import types
import atomicwrites
import py import py
from _pytest.assertion import util from _pytest.assertion import util
@ -33,13 +35,13 @@ else:
PYC_EXT = ".py" + (__debug__ and "c" or "o") PYC_EXT = ".py" + (__debug__ and "c" or "o")
PYC_TAIL = "." + PYTEST_TAG + PYC_EXT PYC_TAIL = "." + PYTEST_TAG + PYC_EXT
REWRITE_NEWLINES = sys.version_info[:2] != (2, 7) and sys.version_info < (3, 2)
ASCII_IS_DEFAULT_ENCODING = sys.version_info[0] < 3 ASCII_IS_DEFAULT_ENCODING = sys.version_info[0] < 3
if sys.version_info >= (3,5): if sys.version_info >= (3, 5):
ast_Call = ast.Call ast_Call = ast.Call
else: else:
ast_Call = lambda a,b,c: ast.Call(a, b, c, None, None) def ast_Call(a, b, c):
return ast.Call(a, b, c, None, None)
class AssertionRewritingHook(object): class AssertionRewritingHook(object):
@ -140,7 +142,7 @@ class AssertionRewritingHook(object):
# Probably a SyntaxError in the test. # Probably a SyntaxError in the test.
return None return None
if write: if write:
_make_rewritten_pyc(state, source_stat, pyc, co) _write_pyc(state, co, source_stat, pyc)
else: else:
state.trace("found cached rewritten pyc for %r" % (fn,)) state.trace("found cached rewritten pyc for %r" % (fn,))
self.modules[name] = co, pyc self.modules[name] = co, pyc
@ -167,29 +169,31 @@ class AssertionRewritingHook(object):
return True return True
for marked in self._must_rewrite: for marked in self._must_rewrite:
if name.startswith(marked): if name == marked or name.startswith(marked + '.'):
state.trace("matched marked file %r (from %r)" % (name, marked)) state.trace("matched marked file %r (from %r)" % (name, marked))
return True return True
return False return False
def mark_rewrite(self, *names): def mark_rewrite(self, *names):
"""Mark import names as needing to be re-written. """Mark import names as needing to be rewritten.
The named module or package as well as any nested modules will The named module or package as well as any nested modules will
be re-written on import. be rewritten on import.
""" """
already_imported = set(names).intersection(set(sys.modules)) already_imported = (set(names)
if already_imported: .intersection(sys.modules)
.difference(self._rewritten_names))
for name in already_imported: for name in already_imported:
if name not in self._rewritten_names: if not AssertionRewriter.is_rewrite_disabled(
sys.modules[name].__doc__ or ""):
self._warn_already_imported(name) self._warn_already_imported(name)
self._must_rewrite.update(names) self._must_rewrite.update(names)
def _warn_already_imported(self, name): def _warn_already_imported(self, name):
self.config.warn( self.config.warn(
'P1', 'P1',
'Module already imported so can not be re-written: %s' % name) 'Module already imported so cannot be rewritten: %s' % name)
def load_module(self, name): def load_module(self, name):
# If there is an existing module object named 'fullname' in # If there is an existing module object named 'fullname' in
@ -209,14 +213,12 @@ class AssertionRewritingHook(object):
mod.__cached__ = pyc mod.__cached__ = pyc
mod.__loader__ = self mod.__loader__ = self
py.builtin.exec_(co, mod.__dict__) py.builtin.exec_(co, mod.__dict__)
except: except: # noqa
if name in sys.modules: if name in sys.modules:
del sys.modules[name] del sys.modules[name]
raise raise
return sys.modules[name] return sys.modules[name]
def is_package(self, name): def is_package(self, name):
try: try:
fd, fn, desc = imp.find_module(name) fd, fn, desc = imp.find_module(name)
@ -258,22 +260,21 @@ def _write_pyc(state, co, source_stat, pyc):
# sometime to be able to use imp.load_compiled to load them. (See # sometime to be able to use imp.load_compiled to load them. (See
# the comment in load_module above.) # the comment in load_module above.)
try: try:
fp = open(pyc, "wb") with atomicwrites.atomic_write(pyc, mode="wb", overwrite=True) as fp:
except IOError:
err = sys.exc_info()[1].errno
state.trace("error writing pyc file at %s: errno=%s" %(pyc, err))
# we ignore any failure to write the cache file
# there are many reasons, permission-denied, __pycache__ being a
# file etc.
return False
try:
fp.write(imp.get_magic()) fp.write(imp.get_magic())
mtime = int(source_stat.mtime) mtime = int(source_stat.mtime)
size = source_stat.size & 0xFFFFFFFF size = source_stat.size & 0xFFFFFFFF
fp.write(struct.pack("<ll", mtime, size)) fp.write(struct.pack("<ll", mtime, size))
if six.PY2:
marshal.dump(co, fp.file)
else:
marshal.dump(co, fp) marshal.dump(co, fp)
finally: except EnvironmentError as e:
fp.close() state.trace("error writing pyc file at %s: errno=%s" % (pyc, e.errno))
# we ignore any failure to write the cache file
# there are many reasons, permission-denied, __pycache__ being a
# file etc.
return False
return True return True
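As a side note (not from this commit): the hunk above replaces the hand-rolled open/close and error handling in ``_write_pyc`` with the third-party ``atomicwrites`` package. A minimal sketch of that pattern, assuming ``atomicwrites`` is installed and using a throwaway file name:

    from atomicwrites import atomic_write

    # The temporary file is renamed over "cache.bin" only if the block
    # completes without raising, so readers never see a half-written file.
    with atomic_write("cache.bin", mode="wb", overwrite=True) as fp:
        fp.write(b"\x00\x01\x02")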
@ -283,6 +284,7 @@ N = "\n".encode("utf-8")
cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+") cookie_re = re.compile(r"^[ \t\f]*#.*coding[:=][ \t]*[-\w.]+")
BOM_UTF8 = '\xef\xbb\xbf' BOM_UTF8 = '\xef\xbb\xbf'
def _rewrite_test(config, fn): def _rewrite_test(config, fn):
"""Try to read and rewrite *fn* and return the code object.""" """Try to read and rewrite *fn* and return the code object."""
state = config._assertstate state = config._assertstate
@ -320,10 +322,6 @@ def _rewrite_test(config, fn):
return None, None return None, None
finally: finally:
del state._indecode del state._indecode
# On Python versions which are not 2.7 and less than or equal to 3.1, the
# parser expects *nix newlines.
if REWRITE_NEWLINES:
source = source.replace(RN, N) + N
try: try:
tree = ast.parse(source) tree = ast.parse(source)
except SyntaxError: except SyntaxError:
@ -340,18 +338,6 @@ def _rewrite_test(config, fn):
return None, None return None, None
return stat, co return stat, co
def _make_rewritten_pyc(state, source_stat, pyc, co):
"""Try to dump rewritten code to *pyc*."""
if sys.platform.startswith("win"):
# Windows grants exclusive access to open files and doesn't have atomic
# rename, so just write into the final file.
_write_pyc(state, co, source_stat, pyc)
else:
# When not on windows, assume rename is atomic. Dump the code object
# into a file specific to this process and atomically replace it.
proc_pyc = pyc + "." + str(os.getpid())
if _write_pyc(state, co, source_stat, proc_pyc):
os.rename(proc_pyc, pyc)
def _read_pyc(source, pyc, trace=lambda x: None): def _read_pyc(source, pyc, trace=lambda x: None):
"""Possibly read a pytest pyc containing rewritten code. """Possibly read a pytest pyc containing rewritten code.
@ -403,15 +389,16 @@ def _saferepr(obj):
""" """
repr = py.io.saferepr(obj) repr = py.io.saferepr(obj)
if py.builtin._istext(repr): if isinstance(repr, six.text_type):
t = py.builtin.text t = six.text_type
else: else:
t = py.builtin.bytes t = six.binary_type
return repr.replace(t("\n"), t("\\n")) return repr.replace(t("\n"), t("\\n"))
from _pytest.assertion.util import format_explanation as _format_explanation # noqa from _pytest.assertion.util import format_explanation as _format_explanation # noqa
def _format_assertmsg(obj): def _format_assertmsg(obj):
"""Format the custom assertion message given. """Format the custom assertion message given.
@ -424,32 +411,35 @@ def _format_assertmsg(obj):
# contains a newline it gets escaped, however if an object has a # contains a newline it gets escaped, however if an object has a
# .__repr__() which contains newlines it does not get escaped. # .__repr__() which contains newlines it does not get escaped.
# However in either case we want to preserve the newline. # However in either case we want to preserve the newline.
if py.builtin._istext(obj) or py.builtin._isbytes(obj): if isinstance(obj, six.text_type) or isinstance(obj, six.binary_type):
s = obj s = obj
is_repr = False is_repr = False
else: else:
s = py.io.saferepr(obj) s = py.io.saferepr(obj)
is_repr = True is_repr = True
if py.builtin._istext(s): if isinstance(s, six.text_type):
t = py.builtin.text t = six.text_type
else: else:
t = py.builtin.bytes t = six.binary_type
s = s.replace(t("\n"), t("\n~")).replace(t("%"), t("%%")) s = s.replace(t("\n"), t("\n~")).replace(t("%"), t("%%"))
if is_repr: if is_repr:
s = s.replace(t("\\n"), t("\n~")) s = s.replace(t("\\n"), t("\n~"))
return s return s
def _should_repr_global_name(obj): def _should_repr_global_name(obj):
return not hasattr(obj, "__name__") and not py.builtin.callable(obj) return not hasattr(obj, "__name__") and not callable(obj)
def _format_boolop(explanations, is_or): def _format_boolop(explanations, is_or):
explanation = "(" + (is_or and " or " or " and ").join(explanations) + ")" explanation = "(" + (is_or and " or " or " and ").join(explanations) + ")"
if py.builtin._istext(explanation): if isinstance(explanation, six.text_type):
t = py.builtin.text t = six.text_type
else: else:
t = py.builtin.bytes t = six.binary_type
return explanation.replace(t('%'), t('%%')) return explanation.replace(t('%'), t('%%'))
def _call_reprcompare(ops, results, expls, each_obj): def _call_reprcompare(ops, results, expls, each_obj):
for i, res, expl in zip(range(len(ops)), results, expls): for i, res, expl in zip(range(len(ops)), results, expls):
try: try:
@ -527,7 +517,7 @@ class AssertionRewriter(ast.NodeVisitor):
"""Assertion rewriting implementation. """Assertion rewriting implementation.
The main entrypoint is to call .run() with an ast.Module instance, The main entrypoint is to call .run() with an ast.Module instance,
this will then find all the assert statements and re-write them to this will then find all the assert statements and rewrite them to
provide intermediate values and a detailed assertion error. See provide intermediate values and a detailed assertion error. See
http://pybites.blogspot.be/2011/07/behind-scenes-of-pytests-new-assertion.html http://pybites.blogspot.be/2011/07/behind-scenes-of-pytests-new-assertion.html
for an overview of how this works. for an overview of how this works.
@ -536,7 +526,7 @@ class AssertionRewriter(ast.NodeVisitor):
statements in an ast.Module and for each ast.Assert statement it statements in an ast.Module and for each ast.Assert statement it
finds call .visit() with it. Then .visit_Assert() takes over and finds call .visit() with it. Then .visit_Assert() takes over and
is responsible for creating new ast statements to replace the is responsible for creating new ast statements to replace the
original assert statement: it re-writes the test of an assertion original assert statement: it rewrites the test of an assertion
to provide intermediate values and replace it with an if statement to provide intermediate values and replace it with an if statement
which raises an assertion error with a detailed explanation in which raises an assertion error with a detailed explanation in
case the expression is false. case the expression is false.
@ -589,23 +579,26 @@ class AssertionRewriter(ast.NodeVisitor):
# docstrings and __future__ imports. # docstrings and __future__ imports.
aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"), aliases = [ast.alias(py.builtin.builtins.__name__, "@py_builtins"),
ast.alias("_pytest.assertion.rewrite", "@pytest_ar")] ast.alias("_pytest.assertion.rewrite", "@pytest_ar")]
expect_docstring = True doc = getattr(mod, "docstring", None)
expect_docstring = doc is None
if doc is not None and self.is_rewrite_disabled(doc):
return
pos = 0 pos = 0
lineno = 0 lineno = 1
for item in mod.body: for item in mod.body:
if (expect_docstring and isinstance(item, ast.Expr) and if (expect_docstring and isinstance(item, ast.Expr) and
isinstance(item.value, ast.Str)): isinstance(item.value, ast.Str)):
doc = item.value.s doc = item.value.s
if "PYTEST_DONT_REWRITE" in doc: if self.is_rewrite_disabled(doc):
# The module has disabled assertion rewriting.
return return
lineno += len(doc) - 1
expect_docstring = False expect_docstring = False
elif (not isinstance(item, ast.ImportFrom) or item.level > 0 or elif (not isinstance(item, ast.ImportFrom) or item.level > 0 or
item.module != "__future__"): item.module != "__future__"):
lineno = item.lineno lineno = item.lineno
break break
pos += 1 pos += 1
else:
lineno = item.lineno
imports = [ast.Import([alias], lineno=lineno, col_offset=0) imports = [ast.Import([alias], lineno=lineno, col_offset=0)
for alias in aliases] for alias in aliases]
mod.body[pos:pos] = imports mod.body[pos:pos] = imports
@ -631,6 +624,10 @@ class AssertionRewriter(ast.NodeVisitor):
not isinstance(field, ast.expr)): not isinstance(field, ast.expr)):
nodes.append(field) nodes.append(field)
@staticmethod
def is_rewrite_disabled(docstring):
return "PYTEST_DONT_REWRITE" in docstring
def variable(self): def variable(self):
"""Get a new variable.""" """Get a new variable."""
# Use a character invalid in python identifiers to avoid clashing. # Use a character invalid in python identifiers to avoid clashing.
@ -714,7 +711,7 @@ class AssertionRewriter(ast.NodeVisitor):
def visit_Assert(self, assert_): def visit_Assert(self, assert_):
"""Return the AST statements to replace the ast.Assert instance. """Return the AST statements to replace the ast.Assert instance.
This re-writes the test of an assertion to provide This rewrites the test of an assertion to provide
intermediate values and replace it with an if statement which intermediate values and replace it with an if statement which
raises an assertion error with a detailed explanation in case raises an assertion error with a detailed explanation in case
the expression is false. the expression is false.
@ -839,7 +836,7 @@ class AssertionRewriter(ast.NodeVisitor):
new_kwargs.append(ast.keyword(keyword.arg, res)) new_kwargs.append(ast.keyword(keyword.arg, res))
if keyword.arg: if keyword.arg:
arg_expls.append(keyword.arg + "=" + expl) arg_expls.append(keyword.arg + "=" + expl)
else: ## **args have `arg` keywords with an .arg of None else: # **args have `arg` keywords with an .arg of None
arg_expls.append("**" + expl) arg_expls.append("**" + expl)
expl = "%s(%s)" % (func_expl, ', '.join(arg_expls)) expl = "%s(%s)" % (func_expl, ', '.join(arg_expls))
@ -893,7 +890,6 @@ class AssertionRewriter(ast.NodeVisitor):
else: else:
visit_Call = visit_Call_legacy visit_Call = visit_Call_legacy
def visit_Attribute(self, attr): def visit_Attribute(self, attr):
if not isinstance(attr.ctx, ast.Load): if not isinstance(attr.ctx, ast.Load):
return self.generic_visit(attr) return self.generic_visit(attr)
@ -907,7 +903,7 @@ class AssertionRewriter(ast.NodeVisitor):
def visit_Compare(self, comp): def visit_Compare(self, comp):
self.push_format_context() self.push_format_context()
left_res, left_expl = self.visit(comp.left) left_res, left_expl = self.visit(comp.left)
if isinstance(comp.left, (_ast.Compare, _ast.BoolOp)): if isinstance(comp.left, (ast.Compare, ast.BoolOp)):
left_expl = "({0})".format(left_expl) left_expl = "({0})".format(left_expl)
res_variables = [self.variable() for i in range(len(comp.ops))] res_variables = [self.variable() for i in range(len(comp.ops))]
load_names = [ast.Name(v, ast.Load()) for v in res_variables] load_names = [ast.Name(v, ast.Load()) for v in res_variables]
@ -918,7 +914,7 @@ class AssertionRewriter(ast.NodeVisitor):
results = [left_res] results = [left_res]
for i, op, next_operand in it: for i, op, next_operand in it:
next_res, next_expl = self.visit(next_operand) next_res, next_expl = self.visit(next_operand)
if isinstance(next_operand, (_ast.Compare, _ast.BoolOp)): if isinstance(next_operand, (ast.Compare, ast.BoolOp)):
next_expl = "({0})".format(next_expl) next_expl = "({0})".format(next_expl)
results.append(next_res) results.append(next_res)
sym = binop_map[op.__class__] sym = binop_map[op.__class__]
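For illustration (not part of this commit): a test module can opt out of rewriting by carrying the marker string checked by ``is_rewrite_disabled`` in its docstring:

    """PYTEST_DONT_REWRITE"""
    # With that marker in the module docstring the rewriter skips this file,
    # so asserts here do not get the intermediate-value explanations.
    def test_plain_assert():
        assert (1, 2) == (1, 2)

Separately, the ``mark_rewrite`` matching above is now stricter: a marked name such as ``pkg`` still matches ``pkg`` and ``pkg.sub``, but no longer a module that merely shares the prefix, such as ``pkg2``.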


@ -7,7 +7,7 @@ Current default behaviour is to truncate assertion explanations at
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import os import os
import py import six
DEFAULT_MAX_LINES = 8 DEFAULT_MAX_LINES = 8
@ -74,8 +74,8 @@ def _truncate_explanation(input_lines, max_lines=None, max_chars=None):
msg += ' ({0} lines hidden)'.format(truncated_line_count) msg += ' ({0} lines hidden)'.format(truncated_line_count)
msg += ", {0}" .format(USAGE_MSG) msg += ", {0}" .format(USAGE_MSG)
truncated_explanation.extend([ truncated_explanation.extend([
py.builtin._totext(""), six.text_type(""),
py.builtin._totext(msg), six.text_type(msg),
]) ])
return truncated_explanation return truncated_explanation


@ -4,13 +4,10 @@ import pprint
import _pytest._code import _pytest._code
import py import py
try: import six
from collections import Sequence from ..compat import Sequence
except ImportError:
Sequence = list
u = six.text_type
u = py.builtin._totext
# The _reprcompare attribute on the util module is used by the new assertion # The _reprcompare attribute on the util module is used by the new assertion
# interpretation code and assertion rewriter to detect this plugin was # interpretation code and assertion rewriter to detect this plugin was
@ -53,11 +50,11 @@ def _split_explanation(explanation):
""" """
raw_lines = (explanation or u('')).split('\n') raw_lines = (explanation or u('')).split('\n')
lines = [raw_lines[0]] lines = [raw_lines[0]]
for l in raw_lines[1:]: for values in raw_lines[1:]:
if l and l[0] in ['{', '}', '~', '>']: if values and values[0] in ['{', '}', '~', '>']:
lines.append(l) lines.append(values)
else: else:
lines[-1] += '\\n' + l lines[-1] += '\\n' + values
return lines return lines
@ -82,7 +79,7 @@ def _format_lines(lines):
stack.append(len(result)) stack.append(len(result))
stackcnt[-1] += 1 stackcnt[-1] += 1
stackcnt.append(0) stackcnt.append(0)
result.append(u(' +') + u(' ')*(len(stack)-1) + s + line[1:]) result.append(u(' +') + u(' ') * (len(stack) - 1) + s + line[1:])
elif line.startswith('}'): elif line.startswith('}'):
stack.pop() stack.pop()
stackcnt.pop() stackcnt.pop()
@ -91,7 +88,7 @@ def _format_lines(lines):
assert line[0] in ['~', '>'] assert line[0] in ['~', '>']
stack[-1] += 1 stack[-1] += 1
indent = len(stack) if line.startswith('~') else len(stack) - 1 indent = len(stack) if line.startswith('~') else len(stack) - 1
result.append(u(' ')*indent + line[1:]) result.append(u(' ') * indent + line[1:])
assert len(stack) == 1 assert len(stack) == 1
return result return result
@ -106,16 +103,22 @@ except NameError:
def assertrepr_compare(config, op, left, right): def assertrepr_compare(config, op, left, right):
"""Return specialised explanations for some operators/operands""" """Return specialised explanations for some operators/operands"""
width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op width = 80 - 15 - len(op) - 2 # 15 chars indentation, 1 space around op
left_repr = py.io.saferepr(left, maxsize=int(width//2)) left_repr = py.io.saferepr(left, maxsize=int(width // 2))
right_repr = py.io.saferepr(right, maxsize=width-len(left_repr)) right_repr = py.io.saferepr(right, maxsize=width - len(left_repr))
summary = u('%s %s %s') % (ecu(left_repr), op, ecu(right_repr)) summary = u('%s %s %s') % (ecu(left_repr), op, ecu(right_repr))
issequence = lambda x: (isinstance(x, (list, tuple, Sequence)) and def issequence(x):
not isinstance(x, basestring)) return isinstance(x, Sequence) and not isinstance(x, basestring)
istext = lambda x: isinstance(x, basestring)
isdict = lambda x: isinstance(x, dict) def istext(x):
isset = lambda x: isinstance(x, (set, frozenset)) return isinstance(x, basestring)
def isdict(x):
return isinstance(x, dict)
def isset(x):
return isinstance(x, (set, frozenset))
def isiterable(obj): def isiterable(obj):
try: try:
@ -168,9 +171,9 @@ def _diff_text(left, right, verbose=False):
""" """
from difflib import ndiff from difflib import ndiff
explanation = [] explanation = []
if isinstance(left, py.builtin.bytes): if isinstance(left, six.binary_type):
left = u(repr(left)[1:-1]).replace(r'\n', '\n') left = u(repr(left)[1:-1]).replace(r'\n', '\n')
if isinstance(right, py.builtin.bytes): if isinstance(right, six.binary_type):
right = u(repr(right)[1:-1]).replace(r'\n', '\n') right = u(repr(right)[1:-1]).replace(r'\n', '\n')
if not verbose: if not verbose:
i = 0 # just in case left or right has zero length i = 0 # just in case left or right has zero length
@ -285,7 +288,7 @@ def _compare_eq_dict(left, right, verbose=False):
def _notin_text(term, text, verbose=False): def _notin_text(term, text, verbose=False):
index = text.find(term) index = text.find(term)
head = text[:index] head = text[:index]
tail = text[index+len(term):] tail = text[index + len(term):]
correct_text = head + tail correct_text = head + tail
diff = _diff_text(correct_text, text, verbose) diff = _diff_text(correct_text, text, verbose)
newdiff = [u('%s is contained here:') % py.io.saferepr(term, maxsize=42)] newdiff = [u('%s is contained here:') % py.io.saferepr(term, maxsize=42)]
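To illustrate (not from this commit): the lambdas above become small helpers that classify the operands handed to ``assertrepr_compare``. A standalone Python 3 sketch of the same checks (pytest obtains ``Sequence`` through its compat shim rather than importing it directly):

    from collections.abc import Sequence

    def issequence(x):
        # list/tuple-like containers, but not strings
        return isinstance(x, Sequence) and not isinstance(x, str)

    def istext(x):
        return isinstance(x, str)

    print(issequence([1, 2]), issequence("ab"), istext("ab"))
    # -> True False True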


@ -5,23 +5,39 @@ the name cache was not chosen to ensure pluggy automatically
ignores the external pytest-cache ignores the external pytest-cache
""" """
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
from collections import OrderedDict
import py import py
import six
import pytest import pytest
import json import json
import os
from os.path import sep as _sep, altsep as _altsep from os.path import sep as _sep, altsep as _altsep
class Cache(object): class Cache(object):
def __init__(self, config): def __init__(self, config):
self.config = config self.config = config
self._cachedir = config.rootdir.join(".cache") self._cachedir = Cache.cache_dir_from_config(config)
self.trace = config.trace.root.get("cache") self.trace = config.trace.root.get("cache")
if config.getvalue("cacheclear"): if config.getoption("cacheclear"):
self.trace("clearing cachedir") self.trace("clearing cachedir")
if self._cachedir.check(): if self._cachedir.check():
self._cachedir.remove() self._cachedir.remove()
self._cachedir.mkdir() self._cachedir.mkdir()
@staticmethod
def cache_dir_from_config(config):
cache_dir = config.getini("cache_dir")
cache_dir = os.path.expanduser(cache_dir)
cache_dir = os.path.expandvars(cache_dir)
if os.path.isabs(cache_dir):
return py.path.local(cache_dir)
else:
return config.rootdir.join(cache_dir)
def makedir(self, name): def makedir(self, name):
""" return a directory path object with the given name. If the """ return a directory path object with the given name. If the
directory does not yet exist, it will be created. You can use it directory does not yet exist, it will be created. You can use it
@ -87,33 +103,35 @@ class Cache(object):
json.dump(value, f, indent=2, sort_keys=True) json.dump(value, f, indent=2, sort_keys=True)
class LFPlugin: class LFPlugin(object):
""" Plugin which implements the --lf (run last-failing) option """ """ Plugin which implements the --lf (run last-failing) option """
def __init__(self, config): def __init__(self, config):
self.config = config self.config = config
active_keys = 'lf', 'failedfirst' active_keys = 'lf', 'failedfirst'
self.active = any(config.getvalue(key) for key in active_keys) self.active = any(config.getoption(key) for key in active_keys)
if self.active:
self.lastfailed = config.cache.get("cache/lastfailed", {}) self.lastfailed = config.cache.get("cache/lastfailed", {})
else: self._previously_failed_count = None
self.lastfailed = {} self._no_failures_behavior = self.config.getoption('last_failed_no_failures')
def pytest_report_header(self): def pytest_report_collectionfinish(self):
if self.active: if self.active:
if not self.lastfailed: if not self._previously_failed_count:
mode = "run all (no recorded failures)" mode = "run {} (no recorded failures)".format(self._no_failures_behavior)
else: else:
mode = "rerun last %d failures%s" % ( noun = 'failure' if self._previously_failed_count == 1 else 'failures'
len(self.lastfailed), suffix = " first" if self.config.getoption(
" first" if self.config.getvalue("failedfirst") else "") "failedfirst") else ""
mode = "rerun previous {count} {noun}{suffix}".format(
count=self._previously_failed_count, suffix=suffix, noun=noun
)
return "run-last-failure: %s" % mode return "run-last-failure: %s" % mode
def pytest_runtest_logreport(self, report): def pytest_runtest_logreport(self, report):
if report.failed and "xfail" not in report.keywords: if (report.when == 'call' and report.passed) or report.skipped:
self.lastfailed[report.nodeid] = True
elif not report.failed:
if report.when == "call":
self.lastfailed.pop(report.nodeid, None) self.lastfailed.pop(report.nodeid, None)
elif report.failed:
self.lastfailed[report.nodeid] = True
def pytest_collectreport(self, report): def pytest_collectreport(self, report):
passed = report.outcome in ('passed', 'skipped') passed = report.outcome in ('passed', 'skipped')
@ -127,7 +145,8 @@ class LFPlugin:
self.lastfailed[report.nodeid] = True self.lastfailed[report.nodeid] = True
def pytest_collection_modifyitems(self, session, config, items): def pytest_collection_modifyitems(self, session, config, items):
if self.active and self.lastfailed: if self.active:
if self.lastfailed:
previously_failed = [] previously_failed = []
previously_passed = [] previously_passed = []
for item in items: for item in items:
@ -135,25 +154,63 @@ class LFPlugin:
previously_failed.append(item) previously_failed.append(item)
else: else:
previously_passed.append(item) previously_passed.append(item)
if not previously_failed and previously_passed: self._previously_failed_count = len(previously_failed)
if not previously_failed:
# running a subset of all tests with recorded failures outside # running a subset of all tests with recorded failures outside
# of the set of tests currently executing # of the set of tests currently executing
pass return
elif self.config.getvalue("lf"): if self.config.getoption("lf"):
items[:] = previously_failed items[:] = previously_failed
config.hook.pytest_deselected(items=previously_passed) config.hook.pytest_deselected(items=previously_passed)
else: else:
items[:] = previously_failed + previously_passed items[:] = previously_failed + previously_passed
elif self._no_failures_behavior == 'none':
config.hook.pytest_deselected(items=items)
items[:] = []
def pytest_sessionfinish(self, session): def pytest_sessionfinish(self, session):
config = self.config config = self.config
if config.getvalue("cacheshow") or hasattr(config, "slaveinput"): if config.getoption("cacheshow") or hasattr(config, "slaveinput"):
return return
prev_failed = config.cache.get("cache/lastfailed", None) is not None
if (session.testscollected and prev_failed) or self.lastfailed: saved_lastfailed = config.cache.get("cache/lastfailed", {})
if saved_lastfailed != self.lastfailed:
config.cache.set("cache/lastfailed", self.lastfailed) config.cache.set("cache/lastfailed", self.lastfailed)
class NFPlugin(object):
""" Plugin which implements the --nf (run new-first) option """
def __init__(self, config):
self.config = config
self.active = config.option.newfirst
self.cached_nodeids = config.cache.get("cache/nodeids", [])
def pytest_collection_modifyitems(self, session, config, items):
if self.active:
new_items = OrderedDict()
other_items = OrderedDict()
for item in items:
if item.nodeid not in self.cached_nodeids:
new_items[item.nodeid] = item
else:
other_items[item.nodeid] = item
items[:] = self._get_increasing_order(six.itervalues(new_items)) + \
self._get_increasing_order(six.itervalues(other_items))
self.cached_nodeids = [x.nodeid for x in items if isinstance(x, pytest.Item)]
def _get_increasing_order(self, items):
return sorted(items, key=lambda item: item.fspath.mtime(), reverse=True)
def pytest_sessionfinish(self, session):
config = self.config
if config.getoption("cacheshow") or hasattr(config, "slaveinput"):
return
config.cache.set("cache/nodeids", self.cached_nodeids)
def pytest_addoption(parser): def pytest_addoption(parser):
group = parser.getgroup("general") group = parser.getgroup("general")
group.addoption( group.addoption(
@ -165,12 +222,25 @@ def pytest_addoption(parser):
help="run all tests but run the last failures first. " help="run all tests but run the last failures first. "
"This may re-order tests and thus lead to " "This may re-order tests and thus lead to "
"repeated fixture setup/teardown") "repeated fixture setup/teardown")
group.addoption(
'--nf', '--new-first', action='store_true', dest="newfirst",
help="run tests from new files first, then the rest of the tests "
"sorted by file mtime")
group.addoption( group.addoption(
'--cache-show', action='store_true', dest="cacheshow", '--cache-show', action='store_true', dest="cacheshow",
help="show cache contents, don't perform collection or tests") help="show cache contents, don't perform collection or tests")
group.addoption( group.addoption(
'--cache-clear', action='store_true', dest="cacheclear", '--cache-clear', action='store_true', dest="cacheclear",
help="remove all cache contents at start of test run.") help="remove all cache contents at start of test run.")
parser.addini(
"cache_dir", default='.pytest_cache',
help="cache directory path.")
group.addoption(
'--lfnf', '--last-failed-no-failures', action='store',
dest='last_failed_no_failures', choices=('all', 'none'), default='all',
help='change the behavior when no test failed in the last run or no '
'information about the last failures was found in the cache'
)
def pytest_cmdline_main(config): def pytest_cmdline_main(config):
@ -179,11 +249,11 @@ def pytest_cmdline_main(config):
return wrap_session(config, cacheshow) return wrap_session(config, cacheshow)
@pytest.hookimpl(tryfirst=True) @pytest.hookimpl(tryfirst=True)
def pytest_configure(config): def pytest_configure(config):
config.cache = Cache(config) config.cache = Cache(config)
config.pluginmanager.register(LFPlugin(config), "lfplugin") config.pluginmanager.register(LFPlugin(config), "lfplugin")
config.pluginmanager.register(NFPlugin(config), "nfplugin")
@pytest.fixture @pytest.fixture
@ -236,7 +306,7 @@ def cacheshow(config, session):
if ddir.isdir() and ddir.listdir(): if ddir.isdir() and ddir.listdir():
tw.sep("-", "cache directories") tw.sep("-", "cache directories")
for p in sorted(basedir.join("d").visit()): for p in sorted(basedir.join("d").visit()):
#if p.check(dir=1): # if p.check(dir=1):
# print("%s/" % p.relto(basedir)) # print("%s/" % p.relto(basedir))
if p.isfile(): if p.isfile():
key = p.relto(basedir) key = p.relto(basedir)
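As an aside (not part of this commit): LFPlugin and NFPlugin above persist their state through the same ``config.cache`` object that any plugin or fixture can use; ``get(key, default)`` and ``set(key, value)`` store JSON-serialisable values under the configured ``cache_dir``. A small fixture sketch with an arbitrary key name:

    import pytest

    @pytest.fixture
    def expensive_value(request):
        # Reuse a value computed in an earlier test run, if present.
        val = request.config.cache.get("example/value", None)
        if val is None:
            val = 42  # stand-in for something expensive to compute
            request.config.cache.set("example/value", val)
        return val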


@ -4,6 +4,7 @@ per-test stdout/stderr capturing mechanism.
""" """
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import collections
import contextlib import contextlib
import sys import sys
import os import os
@ -11,11 +12,10 @@ import io
from io import UnsupportedOperation from io import UnsupportedOperation
from tempfile import TemporaryFile from tempfile import TemporaryFile
import py import six
import pytest import pytest
from _pytest.compat import CaptureIO from _pytest.compat import CaptureIO
unicode = py.builtin.text
patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'} patchsysdict = {0: 'stdin', 1: 'stdout', 2: 'stderr'}
@ -36,14 +36,15 @@ def pytest_addoption(parser):
def pytest_load_initial_conftests(early_config, parser, args): def pytest_load_initial_conftests(early_config, parser, args):
ns = early_config.known_args_namespace ns = early_config.known_args_namespace
if ns.capture == "fd": if ns.capture == "fd":
_py36_windowsconsoleio_workaround() _py36_windowsconsoleio_workaround(sys.stdout)
_colorama_workaround()
_readline_workaround() _readline_workaround()
pluginmanager = early_config.pluginmanager pluginmanager = early_config.pluginmanager
capman = CaptureManager(ns.capture) capman = CaptureManager(ns.capture)
pluginmanager.register(capman, "capturemanager") pluginmanager.register(capman, "capturemanager")
# make sure that capturemanager is properly reset at final shutdown # make sure that capturemanager is properly reset at final shutdown
early_config.add_cleanup(capman.reset_capturings) early_config.add_cleanup(capman.stop_global_capturing)
# make sure logging does not raise exceptions at the end # make sure logging does not raise exceptions at the end
def silence_logging_at_shutdown(): def silence_logging_at_shutdown():
@ -52,17 +53,30 @@ def pytest_load_initial_conftests(early_config, parser, args):
early_config.add_cleanup(silence_logging_at_shutdown) early_config.add_cleanup(silence_logging_at_shutdown)
# finally trigger conftest loading but while capturing (issue93) # finally trigger conftest loading but while capturing (issue93)
capman.init_capturings() capman.start_global_capturing()
outcome = yield outcome = yield
out, err = capman.suspendcapture() out, err = capman.suspend_global_capture()
if outcome.excinfo is not None: if outcome.excinfo is not None:
sys.stdout.write(out) sys.stdout.write(out)
sys.stderr.write(err) sys.stderr.write(err)
class CaptureManager: class CaptureManager(object):
"""
Capture plugin which makes sure that the appropriate capture method is enabled/disabled during collection and each
test phase (setup, call, teardown). After each of those points, the captured output is obtained and
attached to the collection/runtest report.
There are two levels of capture:
* global: which is enabled by default and can be suppressed by the ``-s`` option. This is always enabled/disabled
during collection and each test phase.
* fixture: when a test function or one of its fixtures depends on the ``capsys`` or ``capfd`` fixtures. In this
case special handling is needed to ensure the fixtures take precedence over the global capture.
"""
def __init__(self, method): def __init__(self, method):
self._method = method self._method = method
self._global_capturing = None
def _getcapture(self, method): def _getcapture(self, method):
if method == "fd": if method == "fd":
@ -74,23 +88,24 @@ class CaptureManager:
else: else:
raise ValueError("unknown capturing method: %r" % method) raise ValueError("unknown capturing method: %r" % method)
def init_capturings(self): def start_global_capturing(self):
assert not hasattr(self, "_capturing") assert self._global_capturing is None
self._capturing = self._getcapture(self._method) self._global_capturing = self._getcapture(self._method)
self._capturing.start_capturing() self._global_capturing.start_capturing()
def reset_capturings(self): def stop_global_capturing(self):
cap = self.__dict__.pop("_capturing", None) if self._global_capturing is not None:
if cap is not None: self._global_capturing.pop_outerr_to_orig()
cap.pop_outerr_to_orig() self._global_capturing.stop_capturing()
cap.stop_capturing() self._global_capturing = None
def resumecapture(self): def resume_global_capture(self):
self._capturing.resume_capturing() self._global_capturing.resume_capturing()
def suspendcapture(self, in_=False): def suspend_global_capture(self, item=None, in_=False):
self.deactivate_funcargs() if item is not None:
cap = getattr(self, "_capturing", None) self.deactivate_fixture(item)
cap = getattr(self, "_global_capturing", None)
if cap is not None: if cap is not None:
try: try:
outerr = cap.readouterr() outerr = cap.readouterr()
@ -98,23 +113,26 @@ class CaptureManager:
cap.suspend_capturing(in_=in_) cap.suspend_capturing(in_=in_)
return outerr return outerr
def activate_funcargs(self, pyfuncitem): def activate_fixture(self, item):
capfuncarg = pyfuncitem.__dict__.pop("_capfuncarg", None) """If the current item is using ``capsys`` or ``capfd``, activate them so they take precedence over
if capfuncarg is not None: the global capture.
capfuncarg._start() """
self._capfuncarg = capfuncarg fixture = getattr(item, "_capture_fixture", None)
if fixture is not None:
fixture._start()
def deactivate_funcargs(self): def deactivate_fixture(self, item):
capfuncarg = self.__dict__.pop("_capfuncarg", None) """Deactivates the ``capsys`` or ``capfd`` fixture of this item, if any."""
if capfuncarg is not None: fixture = getattr(item, "_capture_fixture", None)
capfuncarg.close() if fixture is not None:
fixture.close()
@pytest.hookimpl(hookwrapper=True) @pytest.hookimpl(hookwrapper=True)
def pytest_make_collect_report(self, collector): def pytest_make_collect_report(self, collector):
if isinstance(collector, pytest.File): if isinstance(collector, pytest.File):
self.resumecapture() self.resume_global_capture()
outcome = yield outcome = yield
out, err = self.suspendcapture() out, err = self.suspend_global_capture()
rep = outcome.get_result() rep = outcome.get_result()
if out: if out:
rep.sections.append(("Captured stdout", out)) rep.sections.append(("Captured stdout", out))
@ -125,67 +143,139 @@ class CaptureManager:
@pytest.hookimpl(hookwrapper=True) @pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item): def pytest_runtest_setup(self, item):
self.resumecapture() self.resume_global_capture()
# no need to activate a capture fixture because they activate themselves during creation; this
# only makes sense when a fixture uses a capture fixture, otherwise the capture fixture will
# be activated during pytest_runtest_call
yield yield
self.suspendcapture_item(item, "setup") self.suspend_capture_item(item, "setup")
@pytest.hookimpl(hookwrapper=True) @pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(self, item): def pytest_runtest_call(self, item):
self.resumecapture() self.resume_global_capture()
self.activate_funcargs(item) # it is important to activate this fixture during the call phase so it overwrites the "global"
# capture
self.activate_fixture(item)
yield yield
#self.deactivate_funcargs() called from suspendcapture() self.suspend_capture_item(item, "call")
self.suspendcapture_item(item, "call")
@pytest.hookimpl(hookwrapper=True) @pytest.hookimpl(hookwrapper=True)
def pytest_runtest_teardown(self, item): def pytest_runtest_teardown(self, item):
self.resumecapture() self.resume_global_capture()
self.activate_fixture(item)
yield yield
self.suspendcapture_item(item, "teardown") self.suspend_capture_item(item, "teardown")
@pytest.hookimpl(tryfirst=True) @pytest.hookimpl(tryfirst=True)
def pytest_keyboard_interrupt(self, excinfo): def pytest_keyboard_interrupt(self, excinfo):
self.reset_capturings() self.stop_global_capturing()
@pytest.hookimpl(tryfirst=True) @pytest.hookimpl(tryfirst=True)
def pytest_internalerror(self, excinfo): def pytest_internalerror(self, excinfo):
self.reset_capturings() self.stop_global_capturing()
def suspendcapture_item(self, item, when, in_=False): def suspend_capture_item(self, item, when, in_=False):
out, err = self.suspendcapture(in_=in_) out, err = self.suspend_global_capture(item, in_=in_)
item.add_report_section(when, "stdout", out) item.add_report_section(when, "stdout", out)
item.add_report_section(when, "stderr", err) item.add_report_section(when, "stderr", err)
error_capsysfderror = "cannot use capsys and capfd at the same time" capture_fixtures = {'capfd', 'capfdbinary', 'capsys', 'capsysbinary'}
def _ensure_only_one_capture_fixture(request, name):
fixtures = set(request.fixturenames) & capture_fixtures - set((name,))
if fixtures:
fixtures = sorted(fixtures)
fixtures = fixtures[0] if len(fixtures) == 1 else fixtures
raise request.raiseerror(
"cannot use {0} and {1} at the same time".format(
fixtures, name,
),
)
@pytest.fixture @pytest.fixture
def capsys(request): def capsys(request):
"""Enable capturing of writes to sys.stdout/sys.stderr and make """Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
captured output available via ``capsys.readouterr()`` method calls captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple. which return a ``(out, err)`` namedtuple. ``out`` and ``err`` will be ``text``
objects.
""" """
if "capfd" in request.fixturenames: _ensure_only_one_capture_fixture(request, 'capsys')
raise request.raiseerror(error_capsysfderror) with _install_capture_fixture_on_item(request, SysCapture) as fixture:
request.node._capfuncarg = c = CaptureFixture(SysCapture, request) yield fixture
return c
@pytest.fixture
def capsysbinary(request):
"""Enable capturing of writes to ``sys.stdout`` and ``sys.stderr`` and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``bytes``
objects.
"""
_ensure_only_one_capture_fixture(request, 'capsysbinary')
# Currently, the implementation uses the python3 specific `.buffer`
# property of CaptureIO.
if sys.version_info < (3,):
raise request.raiseerror('capsysbinary is only supported on python 3')
with _install_capture_fixture_on_item(request, SysCaptureBinary) as fixture:
yield fixture
@pytest.fixture @pytest.fixture
def capfd(request): def capfd(request):
"""Enable capturing of writes to file descriptors 1 and 2 and make """Enable capturing of writes to file descriptors ``1`` and ``2`` and make
captured output available via ``capfd.readouterr()`` method calls captured output available via ``capfd.readouterr()`` method calls
which return a ``(out, err)`` tuple. which return a ``(out, err)`` tuple. ``out`` and ``err`` will be ``text``
objects.
""" """
if "capsys" in request.fixturenames: _ensure_only_one_capture_fixture(request, 'capfd')
request.raiseerror(error_capsysfderror)
if not hasattr(os, 'dup'): if not hasattr(os, 'dup'):
pytest.skip("capfd funcarg needs os.dup") pytest.skip("capfd fixture needs os.dup function which is not available in this system")
request.node._capfuncarg = c = CaptureFixture(FDCapture, request) with _install_capture_fixture_on_item(request, FDCapture) as fixture:
return c yield fixture
class CaptureFixture: @pytest.fixture
def capfdbinary(request):
"""Enable capturing of write to file descriptors 1 and 2 and make
captured output available via ``capfdbinary.readouterr`` method calls
which return a ``(out, err)`` tuple. ``out`` and ``err`` will be
``bytes`` objects.
"""
_ensure_only_one_capture_fixture(request, 'capfdbinary')
if not hasattr(os, 'dup'):
pytest.skip("capfdbinary fixture needs os.dup function which is not available in this system")
with _install_capture_fixture_on_item(request, FDCaptureBinary) as fixture:
yield fixture
@contextlib.contextmanager
def _install_capture_fixture_on_item(request, capture_class):
"""
Context manager which creates a ``CaptureFixture`` instance and "installs" it on
the item/node of the given request. Used by ``capsys`` and ``capfd``.
The CaptureFixture is added as attribute of the item because it needs to accessed
by ``CaptureManager`` during its ``pytest_runtest_*`` hooks.
"""
request.node._capture_fixture = fixture = CaptureFixture(capture_class, request)
capmanager = request.config.pluginmanager.getplugin('capturemanager')
# need to activate this fixture right away in case it is being used by another fixture (setup phase)
# if this fixture is being used only by a test function (call phase), then we wouldn't need this
# activation, but it doesn't hurt
capmanager.activate_fixture(request.node)
yield fixture
fixture.close()
del request.node._capture_fixture
class CaptureFixture(object):
"""
Object returned by :py:func:`capsys`, :py:func:`capsysbinary`, :py:func:`capfd` and :py:func:`capfdbinary`
fixtures.
"""
def __init__(self, captureclass, request): def __init__(self, captureclass, request):
self.captureclass = captureclass self.captureclass = captureclass
self.request = request self.request = request
@ -202,6 +292,10 @@ class CaptureFixture:
cap.stop_capturing() cap.stop_capturing()
def readouterr(self): def readouterr(self):
"""Read and return the captured output so far, resetting the internal buffer.
:return: captured content as a namedtuple with ``out`` and ``err`` string attributes
"""
try: try:
return self._capture.readouterr() return self._capture.readouterr()
except AttributeError: except AttributeError:
@ -209,12 +303,15 @@ class CaptureFixture:
@contextlib.contextmanager @contextlib.contextmanager
def disabled(self): def disabled(self):
"""Temporarily disables capture while inside the 'with' block."""
self._capture.suspend_capturing()
capmanager = self.request.config.pluginmanager.getplugin('capturemanager') capmanager = self.request.config.pluginmanager.getplugin('capturemanager')
capmanager.suspendcapture_item(self.request.node, "call", in_=True) capmanager.suspend_global_capture(item=None, in_=False)
try: try:
yield yield
finally: finally:
capmanager.resumecapture() capmanager.resume_global_capture()
self._capture.resume_capturing()
def safe_text_dupfile(f, mode, default_encoding="UTF8"): def safe_text_dupfile(f, mode, default_encoding="UTF8"):
@ -238,12 +335,13 @@ def safe_text_dupfile(f, mode, default_encoding="UTF8"):
class EncodedFile(object): class EncodedFile(object):
errors = "strict" # possibly needed by py3 code (issue555) errors = "strict" # possibly needed by py3 code (issue555)
def __init__(self, buffer, encoding): def __init__(self, buffer, encoding):
self.buffer = buffer self.buffer = buffer
self.encoding = encoding self.encoding = encoding
def write(self, obj): def write(self, obj):
if isinstance(obj, unicode): if isinstance(obj, six.text_type):
obj = obj.encode(self.encoding, "replace") obj = obj.encode(self.encoding, "replace")
self.buffer.write(obj) self.buffer.write(obj)
@ -251,10 +349,18 @@ class EncodedFile(object):
data = ''.join(linelist) data = ''.join(linelist)
self.write(data) self.write(data)
@property
def name(self):
"""Ensure that file.name is a string."""
return repr(self.buffer)
def __getattr__(self, name): def __getattr__(self, name):
return getattr(object.__getattribute__(self, "buffer"), name) return getattr(object.__getattribute__(self, "buffer"), name)
CaptureResult = collections.namedtuple("CaptureResult", ["out", "err"])
class MultiCapture(object): class MultiCapture(object):
out = err = in_ = None out = err = in_ = None
@ -315,14 +421,19 @@ class MultiCapture(object):
def readouterr(self): def readouterr(self):
""" return snapshot unicode value of stdout/stderr capturings. """ """ return snapshot unicode value of stdout/stderr capturings. """
return (self.out.snap() if self.out is not None else "", return CaptureResult(self.out.snap() if self.out is not None else "",
self.err.snap() if self.err is not None else "") self.err.snap() if self.err is not None else "")
class NoCapture:
class NoCapture(object):
__init__ = start = done = suspend = resume = lambda *args: None __init__ = start = done = suspend = resume = lambda *args: None
class FDCapture:
""" Capture IO to/from a given os-level filedescriptor. """ class FDCaptureBinary(object):
"""Capture IO to/from a given os-level filedescriptor.
snap() produces `bytes`
"""
def __init__(self, targetfd, tmpfile=None): def __init__(self, targetfd, tmpfile=None):
self.targetfd = targetfd self.targetfd = targetfd
@ -361,17 +472,11 @@ class FDCapture:
self.syscapture.start() self.syscapture.start()
def snap(self): def snap(self):
f = self.tmpfile self.tmpfile.seek(0)
f.seek(0) res = self.tmpfile.read()
res = f.read() self.tmpfile.seek(0)
if res: self.tmpfile.truncate()
enc = getattr(f, "encoding", None)
if enc and isinstance(res, bytes):
res = py.builtin._totext(res, enc, "replace")
f.truncate(0)
f.seek(0)
return res return res
return ''
def done(self): def done(self):
""" stop capturing, restore streams, return original capture file, """ stop capturing, restore streams, return original capture file,
@ -380,7 +485,7 @@ class FDCapture:
os.dup2(targetfd_save, self.targetfd) os.dup2(targetfd_save, self.targetfd)
os.close(targetfd_save) os.close(targetfd_save)
self.syscapture.done() self.syscapture.done()
self.tmpfile.close() _attempt_to_close_capture_file(self.tmpfile)
def suspend(self): def suspend(self):
self.syscapture.suspend() self.syscapture.suspend()
@ -392,12 +497,25 @@ class FDCapture:
def writeorg(self, data): def writeorg(self, data):
""" write to original file descriptor. """ """ write to original file descriptor. """
if py.builtin._istext(data): if isinstance(data, six.text_type):
data = data.encode("utf8") # XXX use encoding of original stream data = data.encode("utf8") # XXX use encoding of original stream
os.write(self.targetfd_save, data) os.write(self.targetfd_save, data)
class SysCapture: class FDCapture(FDCaptureBinary):
"""Capture IO to/from a given os-level filedescriptor.
snap() produces text
"""
def snap(self):
res = FDCaptureBinary.snap(self)
enc = getattr(self.tmpfile, "encoding", None)
if enc and isinstance(res, bytes):
res = six.text_type(res, enc, "replace")
return res
class SysCapture(object):
def __init__(self, fd, tmpfile=None): def __init__(self, fd, tmpfile=None):
name = patchsysdict[fd] name = patchsysdict[fd]
self._old = getattr(sys, name) self._old = getattr(sys, name)
@ -413,16 +531,15 @@ class SysCapture:
setattr(sys, self.name, self.tmpfile) setattr(sys, self.name, self.tmpfile)
def snap(self): def snap(self):
f = self.tmpfile res = self.tmpfile.getvalue()
res = f.getvalue() self.tmpfile.seek(0)
f.truncate(0) self.tmpfile.truncate()
f.seek(0)
return res return res
def done(self): def done(self):
setattr(sys, self.name, self._old) setattr(sys, self.name, self._old)
del self._old del self._old
self.tmpfile.close() _attempt_to_close_capture_file(self.tmpfile)
def suspend(self): def suspend(self):
setattr(sys, self.name, self._old) setattr(sys, self.name, self._old)
@ -435,7 +552,15 @@ class SysCapture:
self._old.flush() self._old.flush()
class DontReadFromInput: class SysCaptureBinary(SysCapture):
def snap(self):
res = self.tmpfile.buffer.getvalue()
self.tmpfile.seek(0)
self.tmpfile.truncate()
return res
class DontReadFromInput(six.Iterator):
"""Temporary stub class. Ideally when stdin is accessed, the """Temporary stub class. Ideally when stdin is accessed, the
capturing should be turned off, with possibly all data captured capturing should be turned off, with possibly all data captured
so far sent to the screen. This should be configurable, though, so far sent to the screen. This should be configurable, though,
@ -449,7 +574,10 @@ class DontReadFromInput:
raise IOError("reading from stdin while output is captured") raise IOError("reading from stdin while output is captured")
readline = read readline = read
readlines = read readlines = read
__iter__ = read __next__ = read
def __iter__(self):
return self
def fileno(self): def fileno(self):
raise UnsupportedOperation("redirected stdin is pseudofile, " raise UnsupportedOperation("redirected stdin is pseudofile, "
@ -463,12 +591,30 @@ class DontReadFromInput:
@property @property
def buffer(self): def buffer(self):
if sys.version_info >= (3,0): if sys.version_info >= (3, 0):
return self return self
else: else:
raise AttributeError('redirected stdin has no attribute buffer') raise AttributeError('redirected stdin has no attribute buffer')
def _colorama_workaround():
"""
Ensure colorama is imported so that it attaches to the correct stdio
handles on Windows.
colorama uses the terminal at import time. So if something does the
first import of colorama while I/O capture is active, colorama will
fail in various ways.
"""
if not sys.platform.startswith('win32'):
return
try:
import colorama # noqa
except ImportError:
pass
def _readline_workaround(): def _readline_workaround():
""" """
Ensure readline is imported so that it attaches to the correct stdio Ensure readline is imported so that it attaches to the correct stdio
@ -496,7 +642,7 @@ def _readline_workaround():
pass pass
def _py36_windowsconsoleio_workaround(): def _py36_windowsconsoleio_workaround(stream):
""" """
Python 3.6 implemented unicode console handling for Windows. This works Python 3.6 implemented unicode console handling for Windows. This works
by reading/writing to the raw console handle using by reading/writing to the raw console handle using
@ -513,13 +659,20 @@ def _py36_windowsconsoleio_workaround():
also means a different handle by replicating the logic in also means a different handle by replicating the logic in
"Py_lifecycle.c:initstdio/create_stdio". "Py_lifecycle.c:initstdio/create_stdio".
:param stream: in practice ``sys.stdout`` or ``sys.stderr``, but given
here as a parameter for unit-testing purposes.
See https://github.com/pytest-dev/py/issues/103 See https://github.com/pytest-dev/py/issues/103
""" """
if not sys.platform.startswith('win32') or sys.version_info[:2] < (3, 6): if not sys.platform.startswith('win32') or sys.version_info[:2] < (3, 6):
return return
buffered = hasattr(sys.stdout.buffer, 'raw') # bail out if ``stream`` doesn't seem like a proper ``io`` stream (#2666)
raw_stdout = sys.stdout.buffer.raw if buffered else sys.stdout.buffer if not hasattr(stream, 'buffer'):
return
buffered = hasattr(stream.buffer, 'raw')
raw_stdout = stream.buffer.raw if buffered else stream.buffer
if not isinstance(raw_stdout, io._WindowsConsoleIO): if not isinstance(raw_stdout, io._WindowsConsoleIO):
return return
@ -540,3 +693,14 @@ def _py36_windowsconsoleio_workaround():
sys.__stdin__ = sys.stdin = _reopen_stdio(sys.stdin, 'rb') sys.__stdin__ = sys.stdin = _reopen_stdio(sys.stdin, 'rb')
sys.__stdout__ = sys.stdout = _reopen_stdio(sys.stdout, 'wb') sys.__stdout__ = sys.stdout = _reopen_stdio(sys.stdout, 'wb')
sys.__stderr__ = sys.stderr = _reopen_stdio(sys.stderr, 'wb') sys.__stderr__ = sys.stderr = _reopen_stdio(sys.stderr, 'wb')
def _attempt_to_close_capture_file(f):
"""Suppress IOError when closing the temporary file used for capturing streams in py27 (#2370)"""
if six.PY2:
try:
f.close()
except IOError:
pass
else:
f.close()
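For illustration (not part of this commit): with the changes above, ``readouterr()`` returns a ``CaptureResult`` namedtuple with ``out``/``err`` attributes, and ``disabled()`` temporarily routes output to the real streams. A minimal test sketch:

    def test_greeting(capsys):
        print("hello")
        captured = capsys.readouterr()
        assert captured.out == "hello\n"
        assert captured.err == ""
        with capsys.disabled():
            print("this goes straight to the real stdout")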


@ -2,17 +2,17 @@
python version compatibility code python version compatibility code
""" """
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import sys
import inspect import codecs
import types
import re
import functools import functools
import inspect
import re
import sys
import py import py
import _pytest import _pytest
from _pytest.outcomes import TEST_OUTCOME
try: try:
import enum import enum
@ -25,6 +25,12 @@ _PY3 = sys.version_info > (3, 0)
_PY2 = not _PY3 _PY2 = not _PY3
if _PY3:
from inspect import signature, Parameter as Parameter
else:
from funcsigs import signature, Parameter as Parameter
NoneType = type(None) NoneType = type(None)
NOTSET = object() NOTSET = object()
@ -32,12 +38,18 @@ PY35 = sys.version_info[:2] >= (3, 5)
PY36 = sys.version_info[:2] >= (3, 6) PY36 = sys.version_info[:2] >= (3, 6)
MODULE_NOT_FOUND_ERROR = 'ModuleNotFoundError' if PY36 else 'ImportError' MODULE_NOT_FOUND_ERROR = 'ModuleNotFoundError' if PY36 else 'ImportError'
if hasattr(inspect, 'signature'): if _PY3:
def _format_args(func): from collections.abc import MutableMapping as MappingMixin # noqa
return str(inspect.signature(func)) from collections.abc import Sequence # noqa
else: else:
def _format_args(func): # those raise DeprecationWarnings in Python >=3.7
return inspect.formatargspec(*inspect.getargspec(func)) from collections import MutableMapping as MappingMixin # noqa
from collections import Sequence # noqa
def _format_args(func):
return str(signature(func))
isfunction = inspect.isfunction isfunction = inspect.isfunction
isclass = inspect.isclass isclass = inspect.isclass
@ -63,12 +75,11 @@ def iscoroutinefunction(func):
def getlocation(function, curdir): def getlocation(function, curdir):
import inspect
fn = py.path.local(inspect.getfile(function)) fn = py.path.local(inspect.getfile(function))
lineno = py.builtin._getcode(function).co_firstlineno lineno = py.builtin._getcode(function).co_firstlineno
if fn.relto(curdir): if fn.relto(curdir):
fn = fn.relto(curdir) fn = fn.relto(curdir)
return "%s:%d" %(fn, lineno+1) return "%s:%d" % (fn, lineno + 1)
def num_mock_patch_args(function): def num_mock_patch_args(function):
@ -76,59 +87,72 @@ def num_mock_patch_args(function):
patchings = getattr(function, "patchings", None) patchings = getattr(function, "patchings", None)
if not patchings: if not patchings:
return 0 return 0
mock = sys.modules.get("mock", sys.modules.get("unittest.mock", None)) mock_modules = [sys.modules.get("mock"), sys.modules.get("unittest.mock")]
if mock is not None: if any(mock_modules):
sentinels = [m.DEFAULT for m in mock_modules if m is not None]
return len([p for p in patchings return len([p for p in patchings
if not p.attribute_name and p.new is mock.DEFAULT]) if not p.attribute_name and p.new in sentinels])
return len(patchings) return len(patchings)
def getfuncargnames(function, startindex=None): def getfuncargnames(function, is_method=False, cls=None):
# XXX merge with main.py's varnames """Returns the names of a function's mandatory arguments.
#assert not isclass(function)
realfunction = function
while hasattr(realfunction, "__wrapped__"):
realfunction = realfunction.__wrapped__
if startindex is None:
startindex = inspect.ismethod(function) and 1 or 0
if realfunction != function:
startindex += num_mock_patch_args(function)
function = realfunction
if isinstance(function, functools.partial):
argnames = inspect.getargs(_pytest._code.getrawcode(function.func))[0]
partial = function
argnames = argnames[len(partial.args):]
if partial.keywords:
for kw in partial.keywords:
argnames.remove(kw)
else:
argnames = inspect.getargs(_pytest._code.getrawcode(function))[0]
defaults = getattr(function, 'func_defaults',
getattr(function, '__defaults__', None)) or ()
numdefaults = len(defaults)
if numdefaults:
return tuple(argnames[startindex:-numdefaults])
return tuple(argnames[startindex:])
This should return the names of all function arguments that:
* Aren't bound to an instance or type as in instance or class methods.
* Don't have default values.
* Aren't bound with functools.partial.
* Aren't replaced with mocks.
The is_method and cls arguments indicate that the function should
be treated as a bound method even though it's not, unless the
function is a static method (this is only checked when cls is given).
@RonnyPfannschmidt: This function should be refactored when we
revisit fixtures. The fixture mechanism should ask the node for
the fixture names, and not try to obtain them directly from the
function object well after collection has occurred.
if sys.version_info[:2] == (2, 6):
def isclass(object):
""" Return true if the object is a class. Overrides inspect.isclass for
python 2.6 because it will return True for objects which always return
something on __getattr__ calls (see #1035).
Backport of https://hg.python.org/cpython/rev/35bf8f7a8edc
""" """
return isinstance(object, (type, types.ClassType)) # The parameters attribute of a Signature object contains an
# ordered mapping of parameter names to Parameter instances. This
# creates a tuple of the names of the parameters that don't have
# defaults.
arg_names = tuple(p.name for p in signature(function).parameters.values()
if (p.kind is Parameter.POSITIONAL_OR_KEYWORD or
p.kind is Parameter.KEYWORD_ONLY) and
p.default is Parameter.empty)
# If this function should be treated as a bound method even though
# it's passed as an unbound method or function, remove the first
# parameter name.
if (is_method or
(cls and not isinstance(cls.__dict__.get(function.__name__, None),
staticmethod))):
arg_names = arg_names[1:]
# Remove any names that will be replaced with mocks.
if hasattr(function, "__wrapped__"):
arg_names = arg_names[num_mock_patch_args(function):]
return arg_names
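As a hedged illustration of the names returned by this helper (it uses the private, internal _pytest.compat.getfuncargnames; the check function below is made up and uses Python 3 syntax):

from _pytest.compat import getfuncargnames

def check(self, db, timeout=10, *, retries):
    pass

# "self" is dropped via is_method=True, "timeout" is skipped because it has a
# default, and the keyword-only "retries" without a default is kept:
assert getfuncargnames(check, is_method=True) == ("db", "retries")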
if _PY3: if _PY3:
import codecs
imap = map
STRING_TYPES = bytes, str STRING_TYPES = bytes, str
UNICODE_TYPES = str, UNICODE_TYPES = str,
def _escape_strings(val): if PY35:
def _bytes_to_ascii(val):
return val.decode('ascii', 'backslashreplace')
else:
def _bytes_to_ascii(val):
if val:
# source: http://goo.gl/bGsnwC
encoded_bytes, _ = codecs.escape_encode(val)
return encoded_bytes.decode('ascii')
else:
# empty bytes crashes codecs.escape_encode (#1087)
return ''
def ascii_escaped(val):
"""If val is pure ascii, returns it as a str(). Otherwise, escapes """If val is pure ascii, returns it as a str(). Otherwise, escapes
bytes objects into a sequence of escaped bytes: bytes objects into a sequence of escaped bytes:
@ -147,22 +171,14 @@ if _PY3:
""" """
if isinstance(val, bytes): if isinstance(val, bytes):
if val: return _bytes_to_ascii(val)
# source: http://goo.gl/bGsnwC
encoded_bytes, _ = codecs.escape_encode(val)
return encoded_bytes.decode('ascii')
else:
# empty bytes crashes codecs.escape_encode (#1087)
return ''
else: else:
return val.encode('unicode_escape').decode('ascii') return val.encode('unicode_escape').decode('ascii')
else: else:
STRING_TYPES = bytes, str, unicode STRING_TYPES = bytes, str, unicode
UNICODE_TYPES = unicode, UNICODE_TYPES = unicode,
from itertools import imap # NOQA def ascii_escaped(val):
def _escape_strings(val):
"""In py2 bytes and str are the same type, so return if it's a bytes """In py2 bytes and str are the same type, so return if it's a bytes
object, return it unchanged if it is a full ascii string, object, return it unchanged if it is a full ascii string,
otherwise escape it into its binary form. otherwise escape it into its binary form.
@ -214,22 +230,21 @@ def getfslineno(obj):
def getimfunc(func): def getimfunc(func):
try: try:
return func.__func__ return func.__func__
except AttributeError:
try:
return func.im_func
except AttributeError: except AttributeError:
return func return func
def safe_getattr(object, name, default): def safe_getattr(object, name, default):
""" Like getattr but return default upon any Exception. """ Like getattr but return default upon any Exception or any OutcomeException.
Attribute access can potentially fail for 'evil' Python objects. Attribute access can potentially fail for 'evil' Python objects.
See issue #214. See issue #214.
It catches OutcomeException because of #2490 (issue #580): new outcomes are derived from BaseException
instead of Exception (for more details check #2707)
""" """
try: try:
return getattr(object, name, default) return getattr(object, name, default)
except Exception: except TEST_OUTCOME:
return default return default
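A hedged illustration of the behaviour described in the docstring, assuming the private helper _pytest.compat.safe_getattr (the Evil class is invented for the example):

from _pytest.compat import safe_getattr

class Evil(object):
    @property
    def bomb(self):
        raise RuntimeError("boom")  # attribute access that raises

# the exception is swallowed and the default is returned instead
assert safe_getattr(Evil(), "bomb", "fallback") == "fallback"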
@ -283,7 +298,15 @@ def _setup_collect_fakemodule():
if _PY2: if _PY2:
from py.io import TextIO as CaptureIO # Without this the test_dupfile_on_textio will fail, otherwise CaptureIO could directly inherit from StringIO.
from py.io import TextIO
class CaptureIO(TextIO):
@property
def encoding(self):
return getattr(self, '_encoding', 'UTF-8')
else: else:
import io import io
@ -297,6 +320,7 @@ else:
def getvalue(self): def getvalue(self):
return self.buffer.getvalue().decode('UTF-8') return self.buffer.getvalue().decode('UTF-8')
class FuncargnamesCompatAttr(object): class FuncargnamesCompatAttr(object):
""" helper class so that Metafunc, Function and FixtureRequest """ helper class so that Metafunc, Function and FixtureRequest
don't need to each define the "funcargnames" compatibility attribute. don't need to each define the "funcargnames" compatibility attribute.
View File
@ -5,15 +5,18 @@ import shlex
import traceback import traceback
import types import types
import warnings import warnings
import copy
import six
import py import py
# DON't import pytest here because it causes import cycle troubles # DON't import pytest here because it causes import cycle troubles
import sys import sys
import os import os
from _pytest.outcomes import Skipped
import _pytest._code import _pytest._code
import _pytest.hookspec # the extension point definitions import _pytest.hookspec # the extension point definitions
import _pytest.assertion import _pytest.assertion
from _pytest._pluggy import PluginManager, HookimplMarker, HookspecMarker from pluggy import PluginManager, HookimplMarker, HookspecMarker
from _pytest.compat import safe_str from _pytest.compat import safe_str
hookimpl = HookimplMarker("pytest") hookimpl = HookimplMarker("pytest")
@ -51,7 +54,7 @@ def main(args=None, plugins=None):
tw = py.io.TerminalWriter(sys.stderr) tw = py.io.TerminalWriter(sys.stderr)
for line in traceback.format_exception(*e.excinfo): for line in traceback.format_exception(*e.excinfo):
tw.line(line.rstrip(), red=True) tw.line(line.rstrip(), red=True)
tw.line("ERROR: could not load %s\n" % (e.path), red=True) tw.line("ERROR: could not load %s\n" % (e.path,), red=True)
return 4 return 4
else: else:
try: try:
@ -59,11 +62,13 @@ def main(args=None, plugins=None):
finally: finally:
config._ensure_unconfigure() config._ensure_unconfigure()
except UsageError as e: except UsageError as e:
tw = py.io.TerminalWriter(sys.stderr)
for msg in e.args: for msg in e.args:
sys.stderr.write("ERROR: %s\n" %(msg,)) tw.line("ERROR: {}\n".format(msg), red=True)
return 4 return 4
class cmdline: # compatibility namespace
class cmdline(object): # NOQA compatibility namespace
main = staticmethod(main) main = staticmethod(main)
@ -99,26 +104,18 @@ def directory_arg(path, optname):
return path return path
_preinit = []
default_plugins = ( default_plugins = (
"mark main terminal runner python fixtures debugging unittest capture skipping " "mark main terminal runner python fixtures debugging unittest capture skipping "
"tmpdir monkeypatch recwarn pastebin helpconfig nose assertion " "tmpdir monkeypatch recwarn pastebin helpconfig nose assertion "
"junitxml resultlog doctest cacheprovider freeze_support " "junitxml resultlog doctest cacheprovider freeze_support "
"setuponly setupplan warnings").split() "setuponly setupplan warnings logging").split()
builtin_plugins = set(default_plugins) builtin_plugins = set(default_plugins)
builtin_plugins.add("pytester") builtin_plugins.add("pytester")
def _preloadplugins():
assert not _preinit
_preinit.append(get_config())
def get_config(): def get_config():
if _preinit:
return _preinit.pop(0)
# subsequent calls to main will create a fresh instance # subsequent calls to main will create a fresh instance
pluginmanager = PytestPluginManager() pluginmanager = PytestPluginManager()
config = Config(pluginmanager) config = Config(pluginmanager)
@ -126,6 +123,7 @@ def get_config():
pluginmanager.import_plugin(spec) pluginmanager.import_plugin(spec)
return config return config
def get_plugin_manager(): def get_plugin_manager():
""" """
Obtain a new instance of the Obtain a new instance of the
@ -137,6 +135,7 @@ def get_plugin_manager():
""" """
return get_config().pluginmanager return get_config().pluginmanager
def _prepareconfig(args=None, plugins=None): def _prepareconfig(args=None, plugins=None):
warning = None warning = None
if args is None: if args is None:
@ -154,7 +153,7 @@ def _prepareconfig(args=None, plugins=None):
try: try:
if plugins: if plugins:
for plugin in plugins: for plugin in plugins:
if isinstance(plugin, py.builtin._basestring): if isinstance(plugin, six.string_types):
pluginmanager.consider_pluginarg(plugin) pluginmanager.consider_pluginarg(plugin)
else: else:
pluginmanager.register(plugin) pluginmanager.register(plugin)
@ -169,13 +168,14 @@ def _prepareconfig(args=None, plugins=None):
class PytestPluginManager(PluginManager): class PytestPluginManager(PluginManager):
""" """
Overwrites :py:class:`pluggy.PluginManager <_pytest.vendored_packages.pluggy.PluginManager>` to add pytest-specific Overwrites :py:class:`pluggy.PluginManager <pluggy.PluginManager>` to add pytest-specific
functionality: functionality:
* loading plugins from the command line, ``PYTEST_PLUGIN`` env variable and * loading plugins from the command line, ``PYTEST_PLUGINS`` env variable and
``pytest_plugins`` global variables found in plugins being loaded; ``pytest_plugins`` global variables found in plugins being loaded;
* ``conftest.py`` loading during start-up; * ``conftest.py`` loading during start-up;
""" """
def __init__(self): def __init__(self):
super(PytestPluginManager, self).__init__("pytest", implprefix="pytest_") super(PytestPluginManager, self).__init__("pytest", implprefix="pytest_")
self._conftest_plugins = set() self._conftest_plugins = set()
@ -201,12 +201,15 @@ class PytestPluginManager(PluginManager):
# Config._consider_importhook will set a real object if required. # Config._consider_importhook will set a real object if required.
self.rewrite_hook = _pytest.assertion.DummyRewriteHook() self.rewrite_hook = _pytest.assertion.DummyRewriteHook()
# Used to know when we are importing conftests after the pytest_configure stage
self._configured = False
def addhooks(self, module_or_class): def addhooks(self, module_or_class):
""" """
.. deprecated:: 2.8 .. deprecated:: 2.8
Use :py:meth:`pluggy.PluginManager.add_hookspecs <_pytest.vendored_packages.pluggy.PluginManager.add_hookspecs>` instead. Use :py:meth:`pluggy.PluginManager.add_hookspecs <PluginManager.add_hookspecs>`
instead.
""" """
warning = dict(code="I2", warning = dict(code="I2",
fslocation=_pytest._code.getfslineno(sys._getframe(1)), fslocation=_pytest._code.getfslineno(sys._getframe(1)),
@ -243,18 +246,12 @@ class PytestPluginManager(PluginManager):
"historic": hasattr(method, "historic")} "historic": hasattr(method, "historic")}
return opts return opts
def _verify_hook(self, hook, hookmethod):
super(PytestPluginManager, self)._verify_hook(hook, hookmethod)
if "__multicall__" in hookmethod.argnames:
fslineno = _pytest._code.getfslineno(hookmethod.function)
warning = dict(code="I1",
fslocation=fslineno,
nodeid=None,
message="%r hook uses deprecated __multicall__ "
"argument" % (hook.name))
self._warn(warning)
def register(self, plugin, name=None): def register(self, plugin, name=None):
if name in ['pytest_catchlog', 'pytest_capturelog']:
self._warn('{0} plugin has been merged into the core, '
'please remove it from your requirements.'.format(
name.replace('_', '-')))
return
ret = super(PytestPluginManager, self).register(plugin, name) ret = super(PytestPluginManager, self).register(plugin, name)
if ret: if ret:
self.hook.pytest_plugin_registered.call_historic( self.hook.pytest_plugin_registered.call_historic(
@ -281,6 +278,7 @@ class PytestPluginManager(PluginManager):
config.addinivalue_line("markers", config.addinivalue_line("markers",
"trylast: mark a hook implementation function such that the " "trylast: mark a hook implementation function such that the "
"plugin machinery will try to call it last/as late as possible.") "plugin machinery will try to call it last/as late as possible.")
self._configured = True
def _warn(self, message): def _warn(self, message):
kwargs = message if isinstance(message, dict) else { kwargs = message if isinstance(message, dict) else {
@ -371,6 +369,9 @@ class PytestPluginManager(PluginManager):
_ensure_removed_sysmodule(conftestpath.purebasename) _ensure_removed_sysmodule(conftestpath.purebasename)
try: try:
mod = conftestpath.pyimport() mod = conftestpath.pyimport()
if hasattr(mod, 'pytest_plugins') and self._configured:
from _pytest.deprecated import PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST
warnings.warn(PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST)
except Exception: except Exception:
raise ConftestImportFailure(conftestpath, sys.exc_info()) raise ConftestImportFailure(conftestpath, sys.exc_info())
@ -382,7 +383,7 @@ class PytestPluginManager(PluginManager):
if path and path.relto(dirpath) or path == dirpath: if path and path.relto(dirpath) or path == dirpath:
assert mod not in mods assert mod not in mods
mods.append(mod) mods.append(mod)
self.trace("loaded conftestmodule %r" %(mod)) self.trace("loaded conftestmodule %r" % (mod))
self.consider_conftest(mod) self.consider_conftest(mod)
return mod return mod
@ -392,7 +393,7 @@ class PytestPluginManager(PluginManager):
# #
def consider_preparse(self, args): def consider_preparse(self, args):
for opt1,opt2 in zip(args, args[1:]): for opt1, opt2 in zip(args, args[1:]):
if opt1 == "-p": if opt1 == "-p":
self.consider_pluginarg(opt2) self.consider_pluginarg(opt2)
@ -424,9 +425,9 @@ class PytestPluginManager(PluginManager):
# "terminal" or "capture". Those plugins are registered under their # "terminal" or "capture". Those plugins are registered under their
# basename for historic purposes but must be imported with the # basename for historic purposes but must be imported with the
# _pytest prefix. # _pytest prefix.
assert isinstance(modname, (py.builtin.text, str)), "module name as text required, got %r" % modname assert isinstance(modname, (six.text_type, str)), "module name as text required, got %r" % modname
modname = str(modname) modname = str(modname)
if self.get_plugin(modname) is not None: if self.is_blocked(modname) or self.get_plugin(modname) is not None:
return return
if modname in builtin_plugins: if modname in builtin_plugins:
importspec = "_pytest." + modname importspec = "_pytest." + modname
@ -436,17 +437,14 @@ class PytestPluginManager(PluginManager):
try: try:
__import__(importspec) __import__(importspec)
except ImportError as e: except ImportError as e:
new_exc = ImportError('Error importing plugin "%s": %s' % (modname, safe_str(e.args[0]))) new_exc_type = ImportError
# copy over name and path attributes new_exc_message = 'Error importing plugin "%s": %s' % (modname, safe_str(e.args[0]))
for attr in ('name', 'path'): new_exc = new_exc_type(new_exc_message)
if hasattr(e, attr):
setattr(new_exc, attr, getattr(e, attr)) six.reraise(new_exc_type, new_exc, sys.exc_info()[2])
raise new_exc
except Exception as e: except Skipped as e:
import pytest self._warn("skipped plugin %r: %s" % ((modname, e.msg)))
if not hasattr(pytest, 'skip') or not isinstance(e, pytest.skip.Exception):
raise
self._warn("skipped plugin %r: %s" %((modname, e.msg)))
else: else:
mod = sys.modules[importspec] mod = sys.modules[importspec]
self.register(mod, modname) self.register(mod, modname)
@ -470,7 +468,7 @@ def _get_plugin_specs_as_list(specs):
return [] return []
class Parser: class Parser(object):
""" Parser for command line arguments and ini-file values. """ Parser for command line arguments and ini-file values.
:ivar extra_info: dict of generic param -> value to display in case :ivar extra_info: dict of generic param -> value to display in case
@ -511,7 +509,7 @@ class Parser:
for i, grp in enumerate(self._groups): for i, grp in enumerate(self._groups):
if grp.name == after: if grp.name == after:
break break
self._groups.insert(i+1, group) self._groups.insert(i + 1, group)
return group return group
def addoption(self, *opts, **attrs): def addoption(self, *opts, **attrs):
@ -549,7 +547,7 @@ class Parser:
a = option.attrs() a = option.attrs()
arggroup.add_argument(*n, **a) arggroup.add_argument(*n, **a)
# bash like autocompletion for dirs (appending '/') # bash like autocompletion for dirs (appending '/')
optparser.add_argument(FILE_OR_DIR, nargs='*').completer=filescompleter optparser.add_argument(FILE_OR_DIR, nargs='*').completer = filescompleter
return optparser return optparser
def parse_setoption(self, args, option, namespace=None): def parse_setoption(self, args, option, namespace=None):
@ -605,7 +603,7 @@ class ArgumentError(Exception):
return self.msg return self.msg
class Argument: class Argument(object):
"""class that mimics the necessary behaviour of optparse.Option """class that mimics the necessary behaviour of optparse.Option
it's currently a least effort implementation it's currently a least effort implementation
@ -637,7 +635,7 @@ class Argument:
pass pass
else: else:
# this might raise a keyerror as well, don't want to catch that # this might raise a keyerror as well, don't want to catch that
if isinstance(typ, py.builtin._basestring): if isinstance(typ, six.string_types):
if typ == 'choice': if typ == 'choice':
warnings.warn( warnings.warn(
'type argument to addoption() is a string %r.' 'type argument to addoption() is a string %r.'
@ -693,7 +691,7 @@ class Argument:
if self._attrs.get('help'): if self._attrs.get('help'):
a = self._attrs['help'] a = self._attrs['help']
a = a.replace('%default', '%(default)s') a = a.replace('%default', '%(default)s')
#a = a.replace('%prog', '%(prog)s') # a = a.replace('%prog', '%(prog)s')
self._attrs['help'] = a self._attrs['help'] = a
return self._attrs return self._attrs
@ -735,7 +733,7 @@ class Argument:
return 'Argument({0})'.format(', '.join(args)) return 'Argument({0})'.format(', '.join(args))
class OptionGroup: class OptionGroup(object):
def __init__(self, name, description="", parser=None): def __init__(self, name, description="", parser=None):
self.name = name self.name = name
self.description = description self.description = description
@ -805,6 +803,7 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
- shortcut if there are only two options and one of them is a short one - shortcut if there are only two options and one of them is a short one
- cache result on action object as this is called at least 2 times - cache result on action object as this is called at least 2 times
""" """
def _format_action_invocation(self, action): def _format_action_invocation(self, action):
orgstr = argparse.HelpFormatter._format_action_invocation(self, action) orgstr = argparse.HelpFormatter._format_action_invocation(self, action)
if orgstr and orgstr[0] != '-': # only optional arguments if orgstr and orgstr[0] != '-': # only optional arguments
@ -836,7 +835,7 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
short_long[shortened] = xxoption short_long[shortened] = xxoption
# now short_long has been filled out to the longest with dashes # now short_long has been filled out to the longest with dashes
# **and** we keep the right option ordering from add_argument # **and** we keep the right option ordering from add_argument
for option in options: # for option in options:
if len(option) == 2 or option[2] == ' ': if len(option) == 2 or option[2] == ' ':
return_list.append(option) return_list.append(option)
if option[2:] == short_long.get(option.replace('-', '')): if option[2:] == short_long.get(option.replace('-', '')):
@ -845,23 +844,14 @@ class DropShorterLongHelpFormatter(argparse.HelpFormatter):
return action._formatted_action_invocation return action._formatted_action_invocation
def _ensure_removed_sysmodule(modname): def _ensure_removed_sysmodule(modname):
try: try:
del sys.modules[modname] del sys.modules[modname]
except KeyError: except KeyError:
pass pass
class CmdOptions(object):
""" holds cmdline options as attributes."""
def __init__(self, values=()):
self.__dict__.update(values)
def __repr__(self):
return "<CmdOptions %r>" %(self.__dict__,)
def copy(self):
return CmdOptions(self.__dict__)
class Notset: class Notset(object):
def __repr__(self): def __repr__(self):
return "<NOTSET>" return "<NOTSET>"
@ -870,13 +860,25 @@ notset = Notset()
FILE_OR_DIR = 'file_or_dir' FILE_OR_DIR = 'file_or_dir'
def _iter_rewritable_modules(package_files):
for fn in package_files:
is_simple_module = '/' not in fn and fn.endswith('.py')
is_package = fn.count('/') == 1 and fn.endswith('__init__.py')
if is_simple_module:
module_name, _ = os.path.splitext(fn)
yield module_name
elif is_package:
package_name = os.path.dirname(fn)
yield package_name
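A small sketch of how this helper maps a setuptools file listing to rewritable module names (file names are invented; assumes the private _pytest.config._iter_rewritable_modules):

from _pytest.config import _iter_rewritable_modules

files = ["pytest_foo.py", "foo_pkg/__init__.py", "foo_pkg/deep/mod.py"]
# top-level modules yield their module name, top-level packages their directory
# name, and anything nested deeper is ignored
assert list(_iter_rewritable_modules(files)) == ["pytest_foo", "foo_pkg"]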
class Config(object): class Config(object):
""" access to configuration values, pluginmanager and plugin hooks. """ """ access to configuration values, pluginmanager and plugin hooks. """
def __init__(self, pluginmanager): def __init__(self, pluginmanager):
#: access to command line option as attributes. #: access to command line option as attributes.
#: (deprecated), use :py:func:`getoption() <_pytest.config.Config.getoption>` instead #: (deprecated), use :py:func:`getoption() <_pytest.config.Config.getoption>` instead
self.option = CmdOptions() self.option = argparse.Namespace()
_a = FILE_OR_DIR _a = FILE_OR_DIR
self._parser = Parser( self._parser = Parser(
usage="%%(prog)s [options] [%s] [%s] [...]" % (_a, _a), usage="%%(prog)s [options] [%s] [%s] [...]" % (_a, _a),
@ -945,9 +947,9 @@ class Config(object):
) )
res = self.hook.pytest_internalerror(excrepr=excrepr, res = self.hook.pytest_internalerror(excrepr=excrepr,
excinfo=excinfo) excinfo=excinfo)
if not py.builtin.any(res): if not any(res):
for line in str(excrepr).split("\n"): for line in str(excrepr).split("\n"):
sys.stderr.write("INTERNALERROR> %s\n" %line) sys.stderr.write("INTERNALERROR> %s\n" % line)
sys.stderr.flush() sys.stderr.flush()
def cwd_relative_nodeid(self, nodeid): def cwd_relative_nodeid(self, nodeid):
@ -980,8 +982,9 @@ class Config(object):
self.pluginmanager._set_initial_conftests(early_config.known_args_namespace) self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)
def _initini(self, args): def _initini(self, args):
ns, unknown_args = self._parser.parse_known_and_unknown_args(args, namespace=self.option.copy()) ns, unknown_args = self._parser.parse_known_and_unknown_args(args, namespace=copy.copy(self.option))
r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args, warnfunc=self.warn) r = determine_setup(ns.inifilename, ns.file_or_dir + unknown_args, warnfunc=self.warn,
rootdir_cmd_arg=ns.rootdir or None)
self.rootdir, self.inifile, self.inicfg = r self.rootdir, self.inifile, self.inicfg = r
self._parser.extra_info['rootdir'] = self.rootdir self._parser.extra_info['rootdir'] = self.rootdir
self._parser.extra_info['inifile'] = self.inifile self._parser.extra_info['inifile'] = self.inifile
@ -991,10 +994,10 @@ class Config(object):
self._override_ini = ns.override_ini or () self._override_ini = ns.override_ini or ()
def _consider_importhook(self, args): def _consider_importhook(self, args):
"""Install the PEP 302 import hook if using assertion re-writing. """Install the PEP 302 import hook if using assertion rewriting.
Needs to parse the --assert=<mode> option from the commandline Needs to parse the --assert=<mode> option from the commandline
and find all the installed plugins to mark them for re-writing and find all the installed plugins to mark them for rewriting
by the importhook. by the importhook.
""" """
ns, unknown_args = self._parser.parse_known_and_unknown_args(args) ns, unknown_args = self._parser.parse_known_and_unknown_args(args)
@ -1006,7 +1009,7 @@ class Config(object):
mode = 'plain' mode = 'plain'
else: else:
self._mark_plugins_for_rewrite(hook) self._mark_plugins_for_rewrite(hook)
self._warn_about_missing_assertion(mode) _warn_about_missing_assertion(mode)
def _mark_plugins_for_rewrite(self, hook): def _mark_plugins_for_rewrite(self, hook):
""" """
@ -1030,45 +1033,22 @@ class Config(object):
for entry in entrypoint.dist._get_metadata(metadata) for entry in entrypoint.dist._get_metadata(metadata)
) )
for fn in package_files: for name in _iter_rewritable_modules(package_files):
is_simple_module = os.sep not in fn and fn.endswith('.py') hook.mark_rewrite(name)
is_package = fn.count(os.sep) == 1 and fn.endswith('__init__.py')
if is_simple_module:
module_name, ext = os.path.splitext(fn)
hook.mark_rewrite(module_name)
elif is_package:
package_name = os.path.dirname(fn)
hook.mark_rewrite(package_name)
def _warn_about_missing_assertion(self, mode):
try:
assert False
except AssertionError:
pass
else:
if mode == 'plain':
sys.stderr.write("WARNING: ASSERTIONS ARE NOT EXECUTED"
" and FAILING TESTS WILL PASS. Are you"
" using python -O?")
else:
sys.stderr.write("WARNING: assertions not in test modules or"
" plugins will be ignored"
" because assert statements are not executed "
"by the underlying Python interpreter "
"(are you using python -O?)\n")
def _preparse(self, args, addopts=True): def _preparse(self, args, addopts=True):
self._initini(args)
if addopts: if addopts:
args[:] = shlex.split(os.environ.get('PYTEST_ADDOPTS', '')) + args args[:] = shlex.split(os.environ.get('PYTEST_ADDOPTS', '')) + args
self._initini(args)
if addopts:
args[:] = self.getini("addopts") + args args[:] = self.getini("addopts") + args
self._checkversion() self._checkversion()
self._consider_importhook(args) self._consider_importhook(args)
self.pluginmanager.consider_preparse(args) self.pluginmanager.consider_preparse(args)
self.pluginmanager.load_setuptools_entrypoints('pytest11') self.pluginmanager.load_setuptools_entrypoints('pytest11')
self.pluginmanager.consider_env() self.pluginmanager.consider_env()
self.known_args_namespace = ns = self._parser.parse_known_args(args, namespace=self.option.copy()) self.known_args_namespace = ns = self._parser.parse_known_args(
confcutdir = self.known_args_namespace.confcutdir args, namespace=copy.copy(self.option))
if self.known_args_namespace.confcutdir is None and self.inifile: if self.known_args_namespace.confcutdir is None and self.inifile:
confcutdir = py.path.local(self.inifile).dirname confcutdir = py.path.local(self.inifile).dirname
self.known_args_namespace.confcutdir = confcutdir self.known_args_namespace.confcutdir = confcutdir
@ -1092,7 +1072,7 @@ class Config(object):
myver = pytest.__version__.split(".") myver = pytest.__version__.split(".")
if myver < ver: if myver < ver:
raise pytest.UsageError( raise pytest.UsageError(
"%s:%d: requires pytest-%s, actual pytest-%s'" %( "%s:%d: requires pytest-%s, actual pytest-%s'" % (
self.inicfg.config.path, self.inicfg.lineof('minversion'), self.inicfg.config.path, self.inicfg.lineof('minversion'),
minver, pytest.__version__)) minver, pytest.__version__))
@ -1142,7 +1122,7 @@ class Config(object):
try: try:
description, type, default = self._parser._inidict[name] description, type, default = self._parser._inidict[name]
except KeyError: except KeyError:
raise ValueError("unknown configuration value: %r" %(name,)) raise ValueError("unknown configuration value: %r" % (name,))
value = self._get_override_ini_value(name) value = self._get_override_ini_value(name)
if value is None: if value is None:
try: try:
@ -1155,10 +1135,10 @@ class Config(object):
return [] return []
if type == "pathlist": if type == "pathlist":
dp = py.path.local(self.inicfg.config.path).dirpath() dp = py.path.local(self.inicfg.config.path).dirpath()
l = [] values = []
for relpath in shlex.split(value): for relpath in shlex.split(value):
l.append(dp.join(relpath, abs=True)) values.append(dp.join(relpath, abs=True))
return l return values
elif type == "args": elif type == "args":
return shlex.split(value) return shlex.split(value)
elif type == "linelist": elif type == "linelist":
@ -1175,26 +1155,25 @@ class Config(object):
except KeyError: except KeyError:
return None return None
modpath = py.path.local(mod.__file__).dirpath() modpath = py.path.local(mod.__file__).dirpath()
l = [] values = []
for relroot in relroots: for relroot in relroots:
if not isinstance(relroot, py.path.local): if not isinstance(relroot, py.path.local):
relroot = relroot.replace("/", py.path.local.sep) relroot = relroot.replace("/", py.path.local.sep)
relroot = modpath.join(relroot, abs=True) relroot = modpath.join(relroot, abs=True)
l.append(relroot) values.append(relroot)
return l return values
def _get_override_ini_value(self, name): def _get_override_ini_value(self, name):
value = None value = None
# override_ini is a list of list, to support both -o foo1=bar1 foo2=bar2 and # override_ini is a list of "ini=value" options
# and -o foo1=bar1 -o foo2=bar2 options # always use the last item if multiple values are set for same ini-name,
# always use the last item if multiple value set for same ini-name,
# e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2 # e.g. -o foo=bar1 -o foo=bar2 will set foo to bar2
for ini_config_list in self._override_ini: for ini_config in self._override_ini:
for ini_config in ini_config_list:
try: try:
(key, user_ini_value) = ini_config.split("=", 1) key, user_ini_value = ini_config.split("=", 1)
except ValueError: except ValueError:
raise UsageError("-o/--override-ini expects option=value style.") raise UsageError("-o/--override-ini expects option=value style.")
else:
if key == name: if key == name:
value = user_ini_value value = user_ini_value
return value return value
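A hedged usage sketch of the repeated -o/--override-ini handling described above (the ini name and values are only examples):

import pytest

# the last value wins when the same ini name is overridden twice,
# so cache_dir ends up as ".cache/b" here
pytest.main(["-o", "cache_dir=.cache/a", "-o", "cache_dir=.cache/b", "tests/"])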
@ -1219,7 +1198,7 @@ class Config(object):
return default return default
if skip: if skip:
import pytest import pytest
pytest.skip("no %r option found" %(name,)) pytest.skip("no %r option found" % (name,))
raise ValueError("no option named %r" % (name,)) raise ValueError("no option named %r" % (name,))
def getvalue(self, name, path=None): def getvalue(self, name, path=None):
@ -1230,12 +1209,37 @@ class Config(object):
""" (deprecated, use getoption(skip=True)) """ """ (deprecated, use getoption(skip=True)) """
return self.getoption(name, skip=True) return self.getoption(name, skip=True)
def _assertion_supported():
try:
assert False
except AssertionError:
return True
else:
return False
def _warn_about_missing_assertion(mode):
if not _assertion_supported():
if mode == 'plain':
sys.stderr.write("WARNING: ASSERTIONS ARE NOT EXECUTED"
" and FAILING TESTS WILL PASS. Are you"
" using python -O?")
else:
sys.stderr.write("WARNING: assertions not in test modules or"
" plugins will be ignored"
" because assert statements are not executed "
"by the underlying Python interpreter "
"(are you using python -O?)\n")
def exists(path, ignore=EnvironmentError): def exists(path, ignore=EnvironmentError):
try: try:
return path.check() return path.check()
except ignore: except ignore:
return False return False
def getcfg(args, warnfunc=None): def getcfg(args, warnfunc=None):
""" """
Search the list of arguments for a valid ini-file for pytest, Search the list of arguments for a valid ini-file for pytest,
@ -1246,7 +1250,7 @@ def getcfg(args, warnfunc=None):
This parameter should be removed when pytest This parameter should be removed when pytest
adopts standard deprecation warnings (#1804). adopts standard deprecation warnings (#1804).
""" """
from _pytest.deprecated import SETUP_CFG_PYTEST from _pytest.deprecated import CFG_PYTEST_SECTION
inibasenames = ["pytest.ini", "tox.ini", "setup.cfg"] inibasenames = ["pytest.ini", "tox.ini", "setup.cfg"]
args = [x for x in args if not str(x).startswith("-")] args = [x for x in args if not str(x).startswith("-")]
if not args: if not args:
@ -1260,7 +1264,7 @@ def getcfg(args, warnfunc=None):
iniconfig = py.iniconfig.IniConfig(p) iniconfig = py.iniconfig.IniConfig(p)
if 'pytest' in iniconfig.sections: if 'pytest' in iniconfig.sections:
if inibasename == 'setup.cfg' and warnfunc: if inibasename == 'setup.cfg' and warnfunc:
warnfunc('C1', SETUP_CFG_PYTEST) warnfunc('C1', CFG_PYTEST_SECTION.format(filename=inibasename))
return base, p, iniconfig['pytest'] return base, p, iniconfig['pytest']
if inibasename == 'setup.cfg' and 'tool:pytest' in iniconfig.sections: if inibasename == 'setup.cfg' and 'tool:pytest' in iniconfig.sections:
return base, p, iniconfig['tool:pytest'] return base, p, iniconfig['tool:pytest']
@ -1319,12 +1323,20 @@ def get_dirs_from_args(args):
] ]
def determine_setup(inifile, args, warnfunc=None): def determine_setup(inifile, args, warnfunc=None, rootdir_cmd_arg=None):
dirs = get_dirs_from_args(args) dirs = get_dirs_from_args(args)
if inifile: if inifile:
iniconfig = py.iniconfig.IniConfig(inifile) iniconfig = py.iniconfig.IniConfig(inifile)
is_cfg_file = str(inifile).endswith('.cfg')
# TODO: [pytest] section in *.cfg files is deprecated. Need refactoring.
sections = ['tool:pytest', 'pytest'] if is_cfg_file else ['pytest']
for section in sections:
try: try:
inicfg = iniconfig["pytest"] inicfg = iniconfig[section]
if is_cfg_file and section == 'pytest' and warnfunc:
from _pytest.deprecated import CFG_PYTEST_SECTION
warnfunc('C1', CFG_PYTEST_SECTION.format(filename=str(inifile)))
break
except KeyError: except KeyError:
inicfg = None inicfg = None
rootdir = get_common_ancestor(dirs) rootdir = get_common_ancestor(dirs)
@ -1339,9 +1351,14 @@ def determine_setup(inifile, args, warnfunc=None):
rootdir, inifile, inicfg = getcfg(dirs, warnfunc=warnfunc) rootdir, inifile, inicfg = getcfg(dirs, warnfunc=warnfunc)
if rootdir is None: if rootdir is None:
rootdir = get_common_ancestor([py.path.local(), ancestor]) rootdir = get_common_ancestor([py.path.local(), ancestor])
is_fs_root = os.path.splitdrive(str(rootdir))[1] == os.sep is_fs_root = os.path.splitdrive(str(rootdir))[1] == '/'
if is_fs_root: if is_fs_root:
rootdir = ancestor rootdir = ancestor
if rootdir_cmd_arg:
rootdir_abs_path = py.path.local(os.path.expandvars(rootdir_cmd_arg))
if not os.path.isdir(str(rootdir_abs_path)):
raise UsageError("Directory '{}' not found. Check your '--rootdir' option.".format(rootdir_abs_path))
rootdir = rootdir_abs_path
return rootdir, inifile, inicfg or {} return rootdir, inifile, inicfg or {}
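A minimal usage sketch for the --rootdir argument consumed above (the path is a placeholder; environment variables are expanded via os.path.expandvars, and a missing directory raises UsageError):

import pytest

pytest.main(["--rootdir", "/path/to/project", "tests/"])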
@ -1361,7 +1378,7 @@ def setns(obj, dic):
else: else:
setattr(obj, name, value) setattr(obj, name, value)
obj.__all__.append(name) obj.__all__.append(name)
#if obj != pytest: # if obj != pytest:
# pytest.__all__.append(name) # pytest.__all__.append(name)
setattr(pytest, name, value) setattr(pytest, name, value)
View File
@ -2,7 +2,14 @@
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import pdb import pdb
import sys import sys
import os
from doctest import UnexpectedException
try:
from builtins import breakpoint # noqa
SUPPORTS_BREAKPOINT_BUILTIN = True
except ImportError:
SUPPORTS_BREAKPOINT_BUILTIN = False
def pytest_addoption(parser): def pytest_addoption(parser):
@ -27,12 +34,20 @@ def pytest_configure(config):
if config.getvalue("usepdb"): if config.getvalue("usepdb"):
config.pluginmanager.register(PdbInvoke(), 'pdbinvoke') config.pluginmanager.register(PdbInvoke(), 'pdbinvoke')
# Use custom Pdb class set_trace instead of default Pdb on breakpoint() call
if SUPPORTS_BREAKPOINT_BUILTIN:
_environ_pythonbreakpoint = os.environ.get('PYTHONBREAKPOINT', '')
if _environ_pythonbreakpoint == '':
sys.breakpointhook = pytestPDB.set_trace
old = (pdb.set_trace, pytestPDB._pluginmanager) old = (pdb.set_trace, pytestPDB._pluginmanager)
def fin(): def fin():
pdb.set_trace, pytestPDB._pluginmanager = old pdb.set_trace, pytestPDB._pluginmanager = old
pytestPDB._config = None pytestPDB._config = None
pytestPDB._pdb_cls = pdb.Pdb pytestPDB._pdb_cls = pdb.Pdb
if SUPPORTS_BREAKPOINT_BUILTIN:
sys.breakpointhook = sys.__breakpointhook__
pdb.set_trace = pytestPDB.set_trace pdb.set_trace = pytestPDB.set_trace
pytestPDB._pluginmanager = config.pluginmanager pytestPDB._pluginmanager = config.pluginmanager
@ -40,7 +55,8 @@ def pytest_configure(config):
pytestPDB._pdb_cls = pdb_cls pytestPDB._pdb_cls = pdb_cls
config._cleanup.append(fin) config._cleanup.append(fin)
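A hedged sketch of the intended behaviour on Python 3.7+: with PYTHONBREAKPOINT left unset, a bare breakpoint() in a test is routed to pytestPDB.set_trace, which suspends capturing before dropping into the debugger (the test body is invented):

def test_debug_me():
    value = 21 * 2
    breakpoint()  # handled by sys.breakpointhook -> pytestPDB.set_trace
    assert value == 42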
class pytestPDB:
class pytestPDB(object):
""" Pseudo PDB that defers to the real pdb. """ """ Pseudo PDB that defers to the real pdb. """
_pluginmanager = None _pluginmanager = None
_config = None _config = None
@ -54,7 +70,7 @@ class pytestPDB:
if cls._pluginmanager is not None: if cls._pluginmanager is not None:
capman = cls._pluginmanager.getplugin("capturemanager") capman = cls._pluginmanager.getplugin("capturemanager")
if capman: if capman:
capman.suspendcapture(in_=True) capman.suspend_global_capture(in_=True)
tw = _pytest.config.create_terminal_writer(cls._config) tw = _pytest.config.create_terminal_writer(cls._config)
tw.line() tw.line()
tw.sep(">", "PDB set_trace (IO-capturing turned off)") tw.sep(">", "PDB set_trace (IO-capturing turned off)")
@ -62,11 +78,11 @@ class pytestPDB:
cls._pdb_cls().set_trace(frame) cls._pdb_cls().set_trace(frame)
class PdbInvoke: class PdbInvoke(object):
def pytest_exception_interact(self, node, call, report): def pytest_exception_interact(self, node, call, report):
capman = node.config.pluginmanager.getplugin("capturemanager") capman = node.config.pluginmanager.getplugin("capturemanager")
if capman: if capman:
out, err = capman.suspendcapture(in_=True) out, err = capman.suspend_global_capture(in_=True)
sys.stdout.write(out) sys.stdout.write(out)
sys.stdout.write(err) sys.stdout.write(err)
_enter_pdb(node, call.excinfo, report) _enter_pdb(node, call.excinfo, report)
@ -85,6 +101,18 @@ def _enter_pdb(node, excinfo, rep):
# for not completely clear reasons. # for not completely clear reasons.
tw = node.config.pluginmanager.getplugin("terminalreporter")._tw tw = node.config.pluginmanager.getplugin("terminalreporter")._tw
tw.line() tw.line()
showcapture = node.config.option.showcapture
for sectionname, content in (('stdout', rep.capstdout),
('stderr', rep.capstderr),
('log', rep.caplog)):
if showcapture in (sectionname, 'all') and content:
tw.sep(">", "captured " + sectionname)
if content[-1:] == "\n":
content = content[:-1]
tw.line(content)
tw.sep(">", "traceback") tw.sep(">", "traceback")
rep.toterminal(tw) rep.toterminal(tw)
tw.sep(">", "entering PDB") tw.sep(">", "entering PDB")
@ -95,10 +123,9 @@ def _enter_pdb(node, excinfo, rep):
def _postmortem_traceback(excinfo): def _postmortem_traceback(excinfo):
if isinstance(excinfo.value, UnexpectedException):
# A doctest.UnexpectedException is not useful for post_mortem. # A doctest.UnexpectedException is not useful for post_mortem.
# Use the underlying exception instead: # Use the underlying exception instead:
from doctest import UnexpectedException
if isinstance(excinfo.value, UnexpectedException):
return excinfo.value.exc_info[2] return excinfo.value.exc_info[2]
else: else:
return excinfo._excinfo[2] return excinfo._excinfo[2]
View File
@ -22,14 +22,18 @@ FUNCARG_PREFIX = (
'and scheduled to be removed in pytest 4.0. ' 'and scheduled to be removed in pytest 4.0. '
'Please remove the prefix and use the @pytest.fixture decorator instead.') 'Please remove the prefix and use the @pytest.fixture decorator instead.')
SETUP_CFG_PYTEST = '[pytest] section in setup.cfg files is deprecated, use [tool:pytest] instead.' CFG_PYTEST_SECTION = '[pytest] section in {filename} files is deprecated, use [tool:pytest] instead.'
GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue" GETFUNCARGVALUE = "use of getfuncargvalue is deprecated, use getfixturevalue"
RESULT_LOG = '--result-log is deprecated and scheduled for removal in pytest 4.0' RESULT_LOG = (
'--result-log is deprecated and scheduled for removal in pytest 4.0.\n'
'See https://docs.pytest.org/en/latest/usage.html#creating-resultlog-format-files for more information.'
)
MARK_INFO_ATTRIBUTE = RemovedInPytest4Warning( MARK_INFO_ATTRIBUTE = RemovedInPytest4Warning(
"MarkInfo objects are deprecated as they contain the merged marks" "MarkInfo objects are deprecated as they contain the merged marks.\n"
"Please use node.iter_markers to iterate over markers correctly"
) )
MARK_PARAMETERSET_UNPACKING = RemovedInPytest4Warning( MARK_PARAMETERSET_UNPACKING = RemovedInPytest4Warning(
@ -37,3 +41,25 @@ MARK_PARAMETERSET_UNPACKING = RemovedInPytest4Warning(
" please use pytest.param(..., marks=...) instead.\n" " please use pytest.param(..., marks=...) instead.\n"
"For more details, see: https://docs.pytest.org/en/latest/parametrize.html" "For more details, see: https://docs.pytest.org/en/latest/parametrize.html"
) )
RECORD_XML_PROPERTY = (
'Fixture renamed from "record_xml_property" to "record_property" as user '
'properties are now available to all reporters.\n'
'"record_xml_property" is now deprecated.'
)
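A short usage sketch of the renamed fixture referenced by this message:

def test_report(record_property):
    # equivalent to the deprecated record_xml_property fixture
    record_property("example_key", 1)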
COLLECTOR_MAKEITEM = RemovedInPytest4Warning(
"pycollector makeitem was removed "
"as it is an accidentially leaked internal api"
)
METAFUNC_ADD_CALL = (
"Metafunc.addcall is deprecated and scheduled to be removed in pytest 4.0.\n"
"Please use Metafunc.parametrize instead."
)
PYTEST_PLUGINS_FROM_NON_TOP_LEVEL_CONFTEST = RemovedInPytest4Warning(
"Defining pytest_plugins in a non-top-level conftest is deprecated, "
"because it affects the entire directory tree in a non-explicit way.\n"
"Please move it to the top level conftest file instead."
)
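An illustration of the deprecation above, with made-up plugin names; pytest_plugins should only be declared in the top-level conftest.py:

# project/conftest.py -- fine, top level of the test tree
pytest_plugins = ["myplugin"]

# project/tests/subdir/conftest.py -- deprecated, affects the whole tree implicitly
# pytest_plugins = ["otherplugin"]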
View File
@ -2,6 +2,8 @@
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import traceback import traceback
import sys
import platform
import pytest import pytest
from _pytest._code.code import ExceptionInfo, ReprFileLocation, TerminalRepr from _pytest._code.code import ExceptionInfo, ReprFileLocation, TerminalRepr
@ -22,6 +24,10 @@ DOCTEST_REPORT_CHOICES = (
DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE, DOCTEST_REPORT_CHOICE_ONLY_FIRST_FAILURE,
) )
# Lazy definition of runner class
RUNNER_CLASS = None
def pytest_addoption(parser): def pytest_addoption(parser):
parser.addini('doctest_optionflags', 'option flags for doctests', parser.addini('doctest_optionflags', 'option flags for doctests',
type="args", default=["ELLIPSIS"]) type="args", default=["ELLIPSIS"])
@ -44,17 +50,28 @@ def pytest_addoption(parser):
action="store_true", default=False, action="store_true", default=False,
help="ignore doctest ImportErrors", help="ignore doctest ImportErrors",
dest="doctest_ignore_import_errors") dest="doctest_ignore_import_errors")
group.addoption("--doctest-continue-on-failure",
action="store_true", default=False,
help="for a given doctest, continue to run after the first failure",
dest="doctest_continue_on_failure")
def pytest_collect_file(path, parent): def pytest_collect_file(path, parent):
config = parent.config config = parent.config
if path.ext == ".py": if path.ext == ".py":
if config.option.doctestmodules: if config.option.doctestmodules and not _is_setup_py(config, path, parent):
return DoctestModule(path, parent) return DoctestModule(path, parent)
elif _is_doctest(config, path, parent): elif _is_doctest(config, path, parent):
return DoctestTextfile(path, parent) return DoctestTextfile(path, parent)
def _is_setup_py(config, path, parent):
if path.basename != "setup.py":
return False
contents = path.read()
return 'setuptools' in contents or 'distutils' in contents
def _is_doctest(config, path, parent): def _is_doctest(config, path, parent):
if path.ext in ('.txt', '.rst') and parent.session.isinitpath(path): if path.ext in ('.txt', '.rst') and parent.session.isinitpath(path):
return True return True
@ -67,14 +84,63 @@ def _is_doctest(config, path, parent):
class ReprFailDoctest(TerminalRepr): class ReprFailDoctest(TerminalRepr):
def __init__(self, reprlocation, lines): def __init__(self, reprlocation_lines):
self.reprlocation = reprlocation # List of (reprlocation, lines) tuples
self.lines = lines self.reprlocation_lines = reprlocation_lines
def toterminal(self, tw): def toterminal(self, tw):
for line in self.lines: for reprlocation, lines in self.reprlocation_lines:
for line in lines:
tw.line(line) tw.line(line)
self.reprlocation.toterminal(tw) reprlocation.toterminal(tw)
class MultipleDoctestFailures(Exception):
def __init__(self, failures):
super(MultipleDoctestFailures, self).__init__()
self.failures = failures
def _init_runner_class():
import doctest
class PytestDoctestRunner(doctest.DebugRunner):
"""
Runner to collect failures. Note that the out variable in this case is
a list instead of a stdout-like object
"""
def __init__(self, checker=None, verbose=None, optionflags=0,
continue_on_failure=True):
doctest.DebugRunner.__init__(
self, checker=checker, verbose=verbose, optionflags=optionflags)
self.continue_on_failure = continue_on_failure
def report_failure(self, out, test, example, got):
failure = doctest.DocTestFailure(test, example, got)
if self.continue_on_failure:
out.append(failure)
else:
raise failure
def report_unexpected_exception(self, out, test, example, exc_info):
failure = doctest.UnexpectedException(test, example, exc_info)
if self.continue_on_failure:
out.append(failure)
else:
raise failure
return PytestDoctestRunner
def _get_runner(checker=None, verbose=None, optionflags=0,
continue_on_failure=True):
# We need this in order to do a lazy import on doctest
global RUNNER_CLASS
if RUNNER_CLASS is None:
RUNNER_CLASS = _init_runner_class()
return RUNNER_CLASS(
checker=checker, verbose=verbose, optionflags=optionflags,
continue_on_failure=continue_on_failure)
class DoctestItem(pytest.Item): class DoctestItem(pytest.Item):
@ -95,51 +161,76 @@ class DoctestItem(pytest.Item):
def runtest(self): def runtest(self):
_check_all_skipped(self.dtest) _check_all_skipped(self.dtest)
self.runner.run(self.dtest) self._disable_output_capturing_for_darwin()
failures = []
self.runner.run(self.dtest, out=failures)
if failures:
raise MultipleDoctestFailures(failures)
def _disable_output_capturing_for_darwin(self):
"""
Disable output capturing. Otherwise, stdout is lost to doctest (#985)
"""
if platform.system() != 'Darwin':
return
capman = self.config.pluginmanager.getplugin("capturemanager")
if capman:
out, err = capman.suspend_global_capture(in_=True)
sys.stdout.write(out)
sys.stderr.write(err)
def repr_failure(self, excinfo): def repr_failure(self, excinfo):
import doctest import doctest
failures = None
if excinfo.errisinstance((doctest.DocTestFailure, if excinfo.errisinstance((doctest.DocTestFailure,
doctest.UnexpectedException)): doctest.UnexpectedException)):
doctestfailure = excinfo.value failures = [excinfo.value]
example = doctestfailure.example elif excinfo.errisinstance(MultipleDoctestFailures):
test = doctestfailure.test failures = excinfo.value.failures
if failures is not None:
reprlocation_lines = []
for failure in failures:
example = failure.example
test = failure.test
filename = test.filename filename = test.filename
if test.lineno is None: if test.lineno is None:
lineno = None lineno = None
else: else:
lineno = test.lineno + example.lineno + 1 lineno = test.lineno + example.lineno + 1
message = excinfo.type.__name__ message = type(failure).__name__
reprlocation = ReprFileLocation(filename, lineno, message) reprlocation = ReprFileLocation(filename, lineno, message)
checker = _get_checker() checker = _get_checker()
report_choice = _get_report_choice(self.config.getoption("doctestreport")) report_choice = _get_report_choice(self.config.getoption("doctestreport"))
if lineno is not None: if lineno is not None:
lines = doctestfailure.test.docstring.splitlines(False) lines = failure.test.docstring.splitlines(False)
# add line numbers to the left of the error message # add line numbers to the left of the error message
lines = ["%03d %s" % (i + test.lineno + 1, x) lines = ["%03d %s" % (i + test.lineno + 1, x)
for (i, x) in enumerate(lines)] for (i, x) in enumerate(lines)]
# trim docstring error lines to 10 # trim docstring error lines to 10
lines = lines[example.lineno - 9:example.lineno + 1] lines = lines[max(example.lineno - 9, 0):example.lineno + 1]
else: else:
lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example'] lines = ['EXAMPLE LOCATION UNKNOWN, not showing all tests of that example']
indent = '>>>' indent = '>>>'
for line in example.source.splitlines(): for line in example.source.splitlines():
lines.append('??? %s %s' % (indent, line)) lines.append('??? %s %s' % (indent, line))
indent = '...' indent = '...'
if excinfo.errisinstance(doctest.DocTestFailure): if isinstance(failure, doctest.DocTestFailure):
lines += checker.output_difference(example, lines += checker.output_difference(example,
doctestfailure.got, report_choice).split("\n") failure.got,
report_choice).split("\n")
else: else:
inner_excinfo = ExceptionInfo(excinfo.value.exc_info) inner_excinfo = ExceptionInfo(failure.exc_info)
lines += ["UNEXPECTED EXCEPTION: %s" % lines += ["UNEXPECTED EXCEPTION: %s" %
repr(inner_excinfo.value)] repr(inner_excinfo.value)]
lines += traceback.format_exception(*excinfo.value.exc_info) lines += traceback.format_exception(*failure.exc_info)
return ReprFailDoctest(reprlocation, lines) reprlocation_lines.append((reprlocation, lines))
return ReprFailDoctest(reprlocation_lines)
else: else:
return super(DoctestItem, self).repr_failure(excinfo) return super(DoctestItem, self).repr_failure(excinfo)
def reportinfo(self): def reportinfo(self):
return self.fspath, None, "[doctest] %s" % self.name return self.fspath, self.dtest.lineno, "[doctest] %s" % self.name
def _get_flag_lookup(): def _get_flag_lookup():
@ -163,6 +254,17 @@ def get_optionflags(parent):
flag_acc |= flag_lookup_table[flag] flag_acc |= flag_lookup_table[flag]
return flag_acc return flag_acc
def _get_continue_on_failure(config):
continue_on_failure = config.getvalue('doctest_continue_on_failure')
if continue_on_failure:
# We need to turn this off if we use pdb since we should stop at
# the first failure
if config.getvalue("usepdb"):
continue_on_failure = False
return continue_on_failure
class DoctestTextfile(pytest.Module): class DoctestTextfile(pytest.Module):
obj = None obj = None
@ -178,8 +280,11 @@ class DoctestTextfile(pytest.Module):
globs = {'__name__': '__main__'} globs = {'__name__': '__main__'}
optionflags = get_optionflags(self) optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags,
checker=_get_checker()) runner = _get_runner(
verbose=0, optionflags=optionflags,
checker=_get_checker(),
continue_on_failure=_get_continue_on_failure(self.config))
_fix_spoof_python2(runner, encoding) _fix_spoof_python2(runner, encoding)
parser = doctest.DocTestParser() parser = doctest.DocTestParser()
@ -214,8 +319,10 @@ class DoctestModule(pytest.Module):
# uses internal doctest module parsing mechanism # uses internal doctest module parsing mechanism
finder = doctest.DocTestFinder() finder = doctest.DocTestFinder()
optionflags = get_optionflags(self) optionflags = get_optionflags(self)
runner = doctest.DebugRunner(verbose=0, optionflags=optionflags, runner = _get_runner(
checker=_get_checker()) verbose=0, optionflags=optionflags,
checker=_get_checker(),
continue_on_failure=_get_continue_on_failure(self.config))
for test in finder.find(module, module.__name__): for test in finder.find(module, module.__name__):
if test.examples: # skip empty doctests if test.examples: # skip empty doctests
@ -355,6 +462,6 @@ def _fix_spoof_python2(runner, encoding):
@pytest.fixture(scope='session') @pytest.fixture(scope='session')
def doctest_namespace(): def doctest_namespace():
""" """
Inject names into the doctest namespace. Fixture that returns a :py:class:`dict` that will be injected into the namespace of doctests.
""" """
return dict() return dict()
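A usage sketch for this fixture, following the documented pattern (numpy is only an example dependency):

# conftest.py
import numpy
import pytest

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace["np"] = numpy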
View File
@ -1,13 +1,18 @@
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import sys
import functools
import inspect
import sys
import warnings
from collections import OrderedDict, deque, defaultdict
from more_itertools import flatten
import attr
import py
from py._code.code import FormattedExcinfo from py._code.code import FormattedExcinfo
import py
import warnings
import inspect
import _pytest import _pytest
from _pytest import nodes
from _pytest._code.code import TerminalRepr from _pytest._code.code import TerminalRepr
from _pytest.compat import ( from _pytest.compat import (
NOTSET, exc_clear, _format_args, NOTSET, exc_clear, _format_args,
@ -15,16 +20,26 @@ from _pytest.compat import (
is_generator, isclass, getimfunc, is_generator, isclass, getimfunc,
getlocation, getfuncargnames, getlocation, getfuncargnames,
safe_getattr, safe_getattr,
FuncargnamesCompatAttr,
) )
from _pytest.runner import fail from _pytest.outcomes import fail, TEST_OUTCOME
from _pytest.compat import FuncargnamesCompatAttr
@attr.s(frozen=True)
class PseudoFixtureDef(object):
cached_result = attr.ib()
scope = attr.ib()
def pytest_sessionstart(session): def pytest_sessionstart(session):
import _pytest.python import _pytest.python
import _pytest.nodes
scopename2class.update({ scopename2class.update({
'class': _pytest.python.Class, 'class': _pytest.python.Class,
'module': _pytest.python.Module, 'module': _pytest.python.Module,
'function': _pytest.main.Item, 'function': _pytest.nodes.Item,
'session': _pytest.main.Session,
}) })
session._fixturemanager = FixtureManager(session) session._fixturemanager = FixtureManager(session)
@ -38,6 +53,7 @@ scope2props["class"] = scope2props["module"] + ("cls",)
scope2props["instance"] = scope2props["class"] + ("instance", ) scope2props["instance"] = scope2props["class"] + ("instance", )
scope2props["function"] = scope2props["instance"] + ("function", "keywords") scope2props["function"] = scope2props["instance"] + ("function", "keywords")
def scopeproperty(name=None, doc=None): def scopeproperty(name=None, doc=None):
def decoratescope(func): def decoratescope(func):
scopename = name or func.__name__ scopename = name or func.__name__
@ -55,8 +71,6 @@ def scopeproperty(name=None, doc=None):
def get_scope_node(node, scope): def get_scope_node(node, scope):
cls = scopename2class.get(scope) cls = scopename2class.get(scope)
if cls is None: if cls is None:
if scope == "session":
return node.session
raise ValueError("unknown scope") raise ValueError("unknown scope")
return node.getparent(cls) return node.getparent(cls)
@ -114,19 +128,17 @@ def add_funcarg_pseudo_fixture_def(collector, metafunc, fixturemanager):
node._name2pseudofixturedef[argname] = fixturedef node._name2pseudofixturedef[argname] = fixturedef
def getfixturemarker(obj): def getfixturemarker(obj):
""" return fixturemarker or None if it doesn't exist or raised """ return fixturemarker or None if it doesn't exist or raised
exceptions.""" exceptions."""
try: try:
return getattr(obj, "_pytestfixturefunction", None) return getattr(obj, "_pytestfixturefunction", None)
except Exception: except TEST_OUTCOME:
# some objects raise errors like request (from flask import request) # some objects raise errors like request (from flask import request)
# we don't expect them to be fixture functions # we don't expect them to be fixture functions
return None return None
def get_parametrized_fixture_keys(item, scopenum): def get_parametrized_fixture_keys(item, scopenum):
""" return list of keys for all parametrized arguments which match """ return list of keys for all parametrized arguments which match
the specified scope. """ the specified scope. """
@ -136,10 +148,10 @@ def get_parametrized_fixture_keys(item, scopenum):
except AttributeError: except AttributeError:
pass pass
else: else:
# cs.indictes.items() is random order of argnames but # cs.indices.items() is random order of argnames. Need to
# then again different functions (items) can change order of # sort this so that different calls to
# arguments so it doesn't matter much probably # get_parametrized_fixture_keys will be deterministic.
for argname, param_index in cs.indices.items(): for argname, param_index in sorted(cs.indices.items()):
if cs._arg2scopenum[argname] != scopenum: if cs._arg2scopenum[argname] != scopenum:
continue continue
if scopenum == 0: # session if scopenum == 0: # session
@ -158,61 +170,59 @@ def get_parametrized_fixture_keys(item, scopenum):
def reorder_items(items): def reorder_items(items):
argkeys_cache = {} argkeys_cache = {}
items_by_argkey = {}
for scopenum in range(0, scopenum_function): for scopenum in range(0, scopenum_function):
argkeys_cache[scopenum] = d = {} argkeys_cache[scopenum] = d = {}
items_by_argkey[scopenum] = item_d = defaultdict(deque)
for item in items: for item in items:
keys = set(get_parametrized_fixture_keys(item, scopenum)) keys = OrderedDict.fromkeys(get_parametrized_fixture_keys(item, scopenum))
if keys: if keys:
d[item] = keys d[item] = keys
return reorder_items_atscope(items, set(), argkeys_cache, 0) for key in keys:
item_d[key].append(item)
items = OrderedDict.fromkeys(items)
return list(reorder_items_atscope(items, argkeys_cache, items_by_argkey, 0))
def reorder_items_atscope(items, ignore, argkeys_cache, scopenum):
def fix_cache_order(item, argkeys_cache, items_by_argkey):
for scopenum in range(0, scopenum_function):
for key in argkeys_cache[scopenum].get(item, []):
items_by_argkey[scopenum][key].appendleft(item)
def reorder_items_atscope(items, argkeys_cache, items_by_argkey, scopenum):
if scopenum >= scopenum_function or len(items) < 3: if scopenum >= scopenum_function or len(items) < 3:
return items return items
items_done = [] ignore = set()
while 1: items_deque = deque(items)
items_before, items_same, items_other, newignore = \ items_done = OrderedDict()
slice_items(items, ignore, argkeys_cache[scopenum]) scoped_items_by_argkey = items_by_argkey[scopenum]
items_before = reorder_items_atscope( scoped_argkeys_cache = argkeys_cache[scopenum]
items_before, ignore, argkeys_cache,scopenum+1) while items_deque:
if items_same is None: no_argkey_group = OrderedDict()
# nothing to reorder in this scope slicing_argkey = None
assert items_other is None while items_deque:
return items_done + items_before item = items_deque.popleft()
items_done.extend(items_before) if item in items_done or item in no_argkey_group:
items = items_same + items_other continue
ignore = newignore argkeys = OrderedDict.fromkeys(k for k in scoped_argkeys_cache.get(item, []) if k not in ignore)
if not argkeys:
no_argkey_group[item] = None
def slice_items(items, ignore, scoped_argkeys_cache):
# we pick the first item which uses a fixture instance in the
# requested scope and which we haven't seen yet. We slice the input
# items list into a list of items_nomatch, items_same and
# items_other
if scoped_argkeys_cache: # do we need to do work at all?
it = iter(items)
# first find a slicing key
for i, item in enumerate(it):
argkeys = scoped_argkeys_cache.get(item)
if argkeys is not None:
argkeys = argkeys.difference(ignore)
if argkeys: # found a slicing key
slicing_argkey = argkeys.pop()
items_before = items[:i]
items_same = [item]
items_other = []
# now slice the remainder of the list
for item in it:
argkeys = scoped_argkeys_cache.get(item)
if argkeys and slicing_argkey in argkeys and \
slicing_argkey not in ignore:
items_same.append(item)
else: else:
items_other.append(item) slicing_argkey, _ = argkeys.popitem()
newignore = ignore.copy() # we don't have to remove relevant items from later in the deque because they'll just be ignored
newignore.add(slicing_argkey) matching_items = [i for i in scoped_items_by_argkey[slicing_argkey] if i in items]
return (items_before, items_same, items_other, newignore) for i in reversed(matching_items):
return items, None, None, None fix_cache_order(i, argkeys_cache, items_by_argkey)
items_deque.appendleft(i)
break
if no_argkey_group:
no_argkey_group = reorder_items_atscope(
no_argkey_group, argkeys_cache, items_by_argkey, scopenum + 1)
for item in no_argkey_group:
items_done[item] = None
ignore.add(slicing_argkey)
return items_done
def fillfixtures(function): def fillfixtures(function):
@ -237,11 +247,11 @@ def fillfixtures(function):
request._fillfixtures() request._fillfixtures()
def get_direct_param_fixture_func(request): def get_direct_param_fixture_func(request):
return request.param return request.param
class FuncFixtureInfo:
class FuncFixtureInfo(object):
def __init__(self, argnames, names_closure, name2fixturedefs): def __init__(self, argnames, names_closure, name2fixturedefs):
self.argnames = argnames self.argnames = argnames
self.names_closure = names_closure self.names_closure = names_closure
@ -262,7 +272,6 @@ class FixtureRequest(FuncargnamesCompatAttr):
self.fixturename = None self.fixturename = None
#: Scope string, one of "function", "class", "module", "session" #: Scope string, one of "function", "class", "module", "session"
self.scope = "function" self.scope = "function"
self._fixture_values = {} # argname -> fixture value
self._fixture_defs = {} # argname -> FixtureDef self._fixture_defs = {} # argname -> FixtureDef
fixtureinfo = pyfuncitem._fixtureinfo fixtureinfo = pyfuncitem._fixtureinfo
self._arg2fixturedefs = fixtureinfo.name2fixturedefs.copy() self._arg2fixturedefs = fixtureinfo.name2fixturedefs.copy()
@ -279,7 +288,6 @@ class FixtureRequest(FuncargnamesCompatAttr):
""" underlying collection node (depends on current request scope)""" """ underlying collection node (depends on current request scope)"""
return self._getscopeitem(self.scope) return self._getscopeitem(self.scope)
def _getnextfixturedef(self, argname): def _getnextfixturedef(self, argname):
fixturedefs = self._arg2fixturedefs.get(argname, None) fixturedefs = self._arg2fixturedefs.get(argname, None)
if fixturedefs is None: if fixturedefs is None:
@ -301,7 +309,6 @@ class FixtureRequest(FuncargnamesCompatAttr):
""" the pytest config object associated with this request. """ """ the pytest config object associated with this request. """
return self._pyfuncitem.config return self._pyfuncitem.config
@scopeproperty() @scopeproperty()
def function(self): def function(self):
""" test function object if the request has a per-function scope. """ """ test function object if the request has a per-function scope. """
@ -365,10 +372,7 @@ class FixtureRequest(FuncargnamesCompatAttr):
:arg marker: a :py:class:`_pytest.mark.MarkDecorator` object :arg marker: a :py:class:`_pytest.mark.MarkDecorator` object
created by a call to ``pytest.mark.NAME(...)``. created by a call to ``pytest.mark.NAME(...)``.
""" """
try: self.node.add_marker(marker)
self.node.keywords[marker.markname] = marker
except AttributeError:
raise ValueError(marker)
def raiseerror(self, msg): def raiseerror(self, msg):
""" raise a FixtureLookupError with the given message. """ """ raise a FixtureLookupError with the given message. """
@ -428,7 +432,8 @@ class FixtureRequest(FuncargnamesCompatAttr):
from _pytest import deprecated from _pytest import deprecated
warnings.warn( warnings.warn(
deprecated.GETFUNCARGVALUE, deprecated.GETFUNCARGVALUE,
DeprecationWarning) DeprecationWarning,
stacklevel=2)
return self.getfixturevalue(argname) return self.getfixturevalue(argname)
def _get_active_fixturedef(self, argname): def _get_active_fixturedef(self, argname):
@ -439,30 +444,35 @@ class FixtureRequest(FuncargnamesCompatAttr):
fixturedef = self._getnextfixturedef(argname) fixturedef = self._getnextfixturedef(argname)
except FixtureLookupError: except FixtureLookupError:
if argname == "request": if argname == "request":
class PseudoFixtureDef:
cached_result = (self, [0], None) cached_result = (self, [0], None)
scope = "function" scope = "function"
return PseudoFixtureDef return PseudoFixtureDef(cached_result, scope)
raise raise
# remove indent to prevent the python3 exception # remove indent to prevent the python3 exception
# from leaking into the call # from leaking into the call
result = self._getfixturevalue(fixturedef) self._compute_fixture_value(fixturedef)
self._fixture_values[argname] = result
self._fixture_defs[argname] = fixturedef self._fixture_defs[argname] = fixturedef
return fixturedef return fixturedef
def _get_fixturestack(self): def _get_fixturestack(self):
current = self current = self
l = [] values = []
while 1: while 1:
fixturedef = getattr(current, "_fixturedef", None) fixturedef = getattr(current, "_fixturedef", None)
if fixturedef is None: if fixturedef is None:
l.reverse() values.reverse()
return l return values
l.append(fixturedef) values.append(fixturedef)
current = current._parent_request current = current._parent_request
def _getfixturevalue(self, fixturedef): def _compute_fixture_value(self, fixturedef):
"""
Creates a SubRequest based on "self" and calls the execute method of the given fixturedef object. This will
force the FixtureDef object to throw away any previous results and compute a new fixture value, which
will be stored into the FixtureDef object itself.
:param FixtureDef fixturedef:
"""
# prepare a subrequest object before calling fixture function # prepare a subrequest object before calling fixture function
# (latter managed by fixturedef) # (latter managed by fixturedef)
argname = fixturedef.argname argname = fixturedef.argname
@ -511,12 +521,11 @@ class FixtureRequest(FuncargnamesCompatAttr):
exc_clear() exc_clear()
try: try:
# call the fixture function # call the fixture function
val = fixturedef.execute(request=subrequest) fixturedef.execute(request=subrequest)
finally: finally:
# if fixture function failed it might have registered finalizers # if fixture function failed it might have registered finalizers
self.session._setupstate.addfinalizer(fixturedef.finish, self.session._setupstate.addfinalizer(functools.partial(fixturedef.finish, request=subrequest),
subrequest.node) subrequest.node)
return val
def _check_scope(self, argname, invoking_scope, requested_scope): def _check_scope(self, argname, invoking_scope, requested_scope):
if argname == "request": if argname == "request":
@ -549,16 +558,17 @@ class FixtureRequest(FuncargnamesCompatAttr):
if node is None and scope == "class": if node is None and scope == "class":
# fallback to function item itself # fallback to function item itself
node = self._pyfuncitem node = self._pyfuncitem
assert node assert node, 'Could not obtain a node for scope "{}" for function {!r}'.format(scope, self._pyfuncitem)
return node return node
def __repr__(self): def __repr__(self):
return "<FixtureRequest for %r>" %(self.node) return "<FixtureRequest for %r>" % (self.node)
class SubRequest(FixtureRequest): class SubRequest(FixtureRequest):
""" a sub request for handling getting a fixture from a """ a sub request for handling getting a fixture from a
test function/fixture. """ test function/fixture. """
def __init__(self, request, scope, param, param_index, fixturedef): def __init__(self, request, scope, param, param_index, fixturedef):
self._parent_request = request self._parent_request = request
self.fixturename = fixturedef.argname self.fixturename = fixturedef.argname
@ -567,9 +577,7 @@ class SubRequest(FixtureRequest):
self.param_index = param_index self.param_index = param_index
self.scope = scope self.scope = scope
self._fixturedef = fixturedef self._fixturedef = fixturedef
self.addfinalizer = fixturedef.addfinalizer
self._pyfuncitem = request._pyfuncitem self._pyfuncitem = request._pyfuncitem
self._fixture_values = request._fixture_values
self._fixture_defs = request._fixture_defs self._fixture_defs = request._fixture_defs
self._arg2fixturedefs = request._arg2fixturedefs self._arg2fixturedefs = request._arg2fixturedefs
self._arg2index = request._arg2index self._arg2index = request._arg2index
@ -578,6 +586,9 @@ class SubRequest(FixtureRequest):
def __repr__(self): def __repr__(self):
return "<SubRequest %r for %r>" % (self.fixturename, self._pyfuncitem) return "<SubRequest %r for %r>" % (self.fixturename, self._pyfuncitem)
def addfinalizer(self, finalizer):
self._fixturedef.addfinalizer(finalizer)
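With ``addfinalizer`` now a method that forwards to the active ``FixtureDef``, teardown callbacks registered through the request are tied to the fixture that registered them. A small sketch of the user-facing pattern this supports; the names are illustrative::

    import pytest

    @pytest.fixture
    def resource(request):
        handle = {"open": True}

        def close():
            handle["open"] = False

        request.addfinalizer(close)  # runs when this fixture is finalized
        return handle

    def test_resource(resource):
        assert resource["open"]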
class ScopeMismatchError(Exception): class ScopeMismatchError(Exception):
""" A fixture function tries to use a different fixture function which """ A fixture function tries to use a different fixture function which
@ -609,6 +620,7 @@ def scope2index(scope, descr, where=None):
class FixtureLookupError(LookupError): class FixtureLookupError(LookupError):
""" could not return a requested Fixture (missing or invalid). """ """ could not return a requested Fixture (missing or invalid). """
def __init__(self, argname, request, msg=None): def __init__(self, argname, request, msg=None):
self.argname = argname self.argname = argname
self.request = request self.request = request
@ -631,9 +643,9 @@ class FixtureLookupError(LookupError):
lines, _ = inspect.getsourcelines(get_real_func(function)) lines, _ = inspect.getsourcelines(get_real_func(function))
except (IOError, IndexError, TypeError): except (IOError, IndexError, TypeError):
error_msg = "file %s, line %s: source code not available" error_msg = "file %s, line %s: source code not available"
addline(error_msg % (fspath, lineno+1)) addline(error_msg % (fspath, lineno + 1))
else: else:
addline("file %s, line %s" % (fspath, lineno+1)) addline("file %s, line %s" % (fspath, lineno + 1))
for i, line in enumerate(lines): for i, line in enumerate(lines):
line = line.rstrip() line = line.rstrip()
addline(" " + line) addline(" " + line)
@ -649,7 +661,7 @@ class FixtureLookupError(LookupError):
if faclist and name not in available: if faclist and name not in available:
available.append(name) available.append(name)
msg = "fixture %r not found" % (self.argname,) msg = "fixture %r not found" % (self.argname,)
msg += "\n available fixtures: %s" %(", ".join(sorted(available)),) msg += "\n available fixtures: %s" % (", ".join(sorted(available)),)
msg += "\n use 'pytest --fixtures [testpath]' for help on them." msg += "\n use 'pytest --fixtures [testpath]' for help on them."
return FixtureLookupErrorRepr(fspath, lineno, tblines, msg, self.argname) return FixtureLookupErrorRepr(fspath, lineno, tblines, msg, self.argname)
@ -675,12 +687,12 @@ class FixtureLookupErrorRepr(TerminalRepr):
tw.line('{0} {1}'.format(FormattedExcinfo.flow_marker, tw.line('{0} {1}'.format(FormattedExcinfo.flow_marker,
line.strip()), red=True) line.strip()), red=True)
tw.line() tw.line()
tw.line("%s:%d" % (self.filename, self.firstlineno+1)) tw.line("%s:%d" % (self.filename, self.firstlineno + 1))
def fail_fixturefunc(fixturefunc, msg): def fail_fixturefunc(fixturefunc, msg):
fs, lineno = getfslineno(fixturefunc) fs, lineno = getfslineno(fixturefunc)
location = "%s:%s" % (fs, lineno+1) location = "%s:%s" % (fs, lineno + 1)
source = _pytest._code.Source(fixturefunc) source = _pytest._code.Source(fixturefunc)
fail(msg + ":\n\n" + str(source.indent()) + "\n" + location, fail(msg + ":\n\n" + str(source.indent()) + "\n" + location,
pytrace=False) pytrace=False)
@ -707,8 +719,9 @@ def call_fixture_func(fixturefunc, request, kwargs):
return res return res
class FixtureDef: class FixtureDef(object):
""" A container for a factory definition. """ """ A container for a factory definition. """
def __init__(self, fixturemanager, baseid, argname, func, scope, params, def __init__(self, fixturemanager, baseid, argname, func, scope, params,
unittest=False, ids=None): unittest=False, ids=None):
self._fixturemanager = fixturemanager self._fixturemanager = fixturemanager
@ -723,23 +736,22 @@ class FixtureDef:
where=baseid where=baseid
) )
self.params = params self.params = params
startindex = unittest and 1 or None self.argnames = getfuncargnames(func, is_method=unittest)
self.argnames = getfuncargnames(func, startindex=startindex)
self.unittest = unittest self.unittest = unittest
self.ids = ids self.ids = ids
self._finalizer = [] self._finalizers = []
def addfinalizer(self, finalizer): def addfinalizer(self, finalizer):
self._finalizer.append(finalizer) self._finalizers.append(finalizer)
def finish(self): def finish(self, request):
exceptions = [] exceptions = []
try: try:
while self._finalizer: while self._finalizers:
try: try:
func = self._finalizer.pop() func = self._finalizers.pop()
func() func()
except: except: # noqa
exceptions.append(sys.exc_info()) exceptions.append(sys.exc_info())
if exceptions: if exceptions:
e = exceptions[0] e = exceptions[0]
@ -747,12 +759,15 @@ class FixtureDef:
py.builtin._reraise(*e) py.builtin._reraise(*e)
finally: finally:
ihook = self._fixturemanager.session.ihook hook = self._fixturemanager.session.gethookproxy(request.node.fspath)
ihook.pytest_fixture_post_finalizer(fixturedef=self) hook.pytest_fixture_post_finalizer(fixturedef=self, request=request)
# even if finalization fails, we invalidate # even if finalization fails, we invalidate
# the cached fixture value # the cached fixture value and remove
# all finalizers because they may be bound methods which will
# keep instances alive
if hasattr(self, "cached_result"): if hasattr(self, "cached_result"):
del self.cached_result del self.cached_result
self._finalizers = []
def execute(self, request): def execute(self, request):
# get required arguments and register our own finish() # get required arguments and register our own finish()
@ -760,7 +775,7 @@ class FixtureDef:
for argname in self.argnames: for argname in self.argnames:
fixturedef = request._get_active_fixturedef(argname) fixturedef = request._get_active_fixturedef(argname)
if argname != "request": if argname != "request":
fixturedef.addfinalizer(self.finish) fixturedef.addfinalizer(functools.partial(self.finish, request=request))
my_cache_key = request.param_index my_cache_key = request.param_index
cached_result = getattr(self, "cached_result", None) cached_result = getattr(self, "cached_result", None)
@ -773,16 +788,17 @@ class FixtureDef:
return result return result
# we have a previous but differently parametrized fixture instance # we have a previous but differently parametrized fixture instance
# so we need to tear it down before creating a new one # so we need to tear it down before creating a new one
self.finish() self.finish(request)
assert not hasattr(self, "cached_result") assert not hasattr(self, "cached_result")
ihook = self._fixturemanager.session.ihook hook = self._fixturemanager.session.gethookproxy(request.node.fspath)
return ihook.pytest_fixture_setup(fixturedef=self, request=request) return hook.pytest_fixture_setup(fixturedef=self, request=request)
def __repr__(self): def __repr__(self):
return ("<FixtureDef name=%r scope=%r baseid=%r >" % return ("<FixtureDef name=%r scope=%r baseid=%r >" %
(self.argname, self.scope, self.baseid)) (self.argname, self.scope, self.baseid))
def pytest_fixture_setup(fixturedef, request): def pytest_fixture_setup(fixturedef, request):
""" Execution of fixture setup. """ """ Execution of fixture setup. """
kwargs = {} kwargs = {}
@ -808,25 +824,34 @@ def pytest_fixture_setup(fixturedef, request):
my_cache_key = request.param_index my_cache_key = request.param_index
try: try:
result = call_fixture_func(fixturefunc, request, kwargs) result = call_fixture_func(fixturefunc, request, kwargs)
except Exception: except TEST_OUTCOME:
fixturedef.cached_result = (None, my_cache_key, sys.exc_info()) fixturedef.cached_result = (None, my_cache_key, sys.exc_info())
raise raise
fixturedef.cached_result = (result, my_cache_key, None) fixturedef.cached_result = (result, my_cache_key, None)
return result return result
class FixtureFunctionMarker: def _ensure_immutable_ids(ids):
def __init__(self, scope, params, autouse=False, ids=None, name=None): if ids is None:
self.scope = scope return
self.params = params if callable(ids):
self.autouse = autouse return ids
self.ids = ids return tuple(ids)
self.name = name
@attr.s(frozen=True)
class FixtureFunctionMarker(object):
scope = attr.ib()
params = attr.ib(converter=attr.converters.optional(tuple))
autouse = attr.ib(default=False)
ids = attr.ib(default=None, converter=_ensure_immutable_ids)
name = attr.ib(default=None)
def __call__(self, function): def __call__(self, function):
if isclass(function): if isclass(function):
raise ValueError( raise ValueError(
"class fixtures not supported (may be in the future)") "class fixtures not supported (may be in the future)")
if getattr(function, "_pytestfixturefunction", False): if getattr(function, "_pytestfixturefunction", False):
raise ValueError( raise ValueError(
"fixture is being applied more than once to the same function") "fixture is being applied more than once to the same function")
@ -835,9 +860,8 @@ class FixtureFunctionMarker:
return function return function
def fixture(scope="function", params=None, autouse=False, ids=None, name=None): def fixture(scope="function", params=None, autouse=False, ids=None, name=None):
""" (return a) decorator to mark a fixture factory function. """Decorator to mark a fixture factory function.
This decorator can be used (with or without parameters) to define a This decorator can be used (with or without parameters) to define a
fixture function. The name of the fixture function can later be fixture function. The name of the fixture function can later be
@ -874,7 +898,7 @@ def fixture(scope="function", params=None, autouse=False, ids=None, name=None):
instead of ``return``. In this case, the code block after the ``yield`` statement is executed instead of ``return``. In this case, the code block after the ``yield`` statement is executed
as teardown code regardless of the test outcome. A fixture function must yield exactly once. as teardown code regardless of the test outcome. A fixture function must yield exactly once.
""" """
if callable(scope) and params is None and autouse == False: if callable(scope) and params is None and autouse is False:
# direct decoration # direct decoration
return FixtureFunctionMarker( return FixtureFunctionMarker(
"function", params, autouse, name=name)(scope) "function", params, autouse, name=name)(scope)
@ -902,11 +926,19 @@ defaultfuncargprefixmarker = fixture()
@fixture(scope="session") @fixture(scope="session")
def pytestconfig(request): def pytestconfig(request):
""" the pytest config object with access to command line opts.""" """Session-scoped fixture that returns the :class:`_pytest.config.Config` object.
Example::
def test_foo(pytestconfig):
if pytestconfig.getoption("verbose"):
...
"""
return request.config return request.config
class FixtureManager: class FixtureManager(object):
""" """
pytest fixtures definitions and information is stored and managed pytest fixtures definitions and information is stored and managed
from this class. from this class.
@ -951,20 +983,14 @@ class FixtureManager:
self._nodeid_and_autousenames = [("", self.config.getini("usefixtures"))] self._nodeid_and_autousenames = [("", self.config.getini("usefixtures"))]
session.config.pluginmanager.register(self, "funcmanage") session.config.pluginmanager.register(self, "funcmanage")
def getfixtureinfo(self, node, func, cls, funcargs=True): def getfixtureinfo(self, node, func, cls, funcargs=True):
if funcargs and not hasattr(node, "nofuncargs"): if funcargs and not hasattr(node, "nofuncargs"):
if cls is not None: argnames = getfuncargnames(func, cls=cls)
startindex = 1
else:
startindex = None
argnames = getfuncargnames(func, startindex)
else: else:
argnames = () argnames = ()
usefixtures = getattr(func, "usefixtures", None) usefixtures = flatten(mark.args for mark in node.iter_markers() if mark.name == "usefixtures")
initialnames = argnames initialnames = argnames
if usefixtures is not None: initialnames = tuple(usefixtures) + initialnames
initialnames = usefixtures.args + initialnames
fm = node.session._fixturemanager fm = node.session._fixturemanager
names_closure, arg2fixturedefs = fm.getfixtureclosure(initialnames, names_closure, arg2fixturedefs = fm.getfixtureclosure(initialnames,
node) node)
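``getfixtureinfo`` now collects ``usefixtures`` marker arguments via ``node.iter_markers()`` and merges them with the function's own argument names before computing the fixture closure. A hedged example of what that picks up; the fixture and test names are invented::

    import pytest

    @pytest.fixture
    def clean_dir(tmpdir, monkeypatch):
        monkeypatch.chdir(tmpdir)  # run the test inside a fresh directory

    @pytest.mark.usefixtures("clean_dir")
    def test_runs_in_empty_dir(tmpdir):
        assert tmpdir.listdir() == []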
@ -982,8 +1008,8 @@ class FixtureManager:
# by their test id) # by their test id)
if p.basename.startswith("conftest.py"): if p.basename.startswith("conftest.py"):
nodeid = p.dirpath().relto(self.config.rootdir) nodeid = p.dirpath().relto(self.config.rootdir)
if p.sep != "/": if p.sep != nodes.SEP:
nodeid = nodeid.replace(p.sep, "/") nodeid = nodeid.replace(p.sep, nodes.SEP)
self.parsefactories(plugin, nodeid) self.parsefactories(plugin, nodeid)
def _getautousenames(self, nodeid): def _getautousenames(self, nodeid):
@ -993,13 +1019,10 @@ class FixtureManager:
if nodeid.startswith(baseid): if nodeid.startswith(baseid):
if baseid: if baseid:
i = len(baseid) i = len(baseid)
nextchar = nodeid[i:i+1] nextchar = nodeid[i:i + 1]
if nextchar and nextchar not in ":/": if nextchar and nextchar not in ":/":
continue continue
autousenames.extend(basenames) autousenames.extend(basenames)
# make sure autousenames are sorted by scope, scopenum 0 is session
autousenames.sort(
key=lambda x: self._arg2fixturedefs[x][-1].scopenum)
return autousenames return autousenames
def getfixtureclosure(self, fixturenames, parentnode): def getfixtureclosure(self, fixturenames, parentnode):
@ -1030,6 +1053,16 @@ class FixtureManager:
if fixturedefs: if fixturedefs:
arg2fixturedefs[argname] = fixturedefs arg2fixturedefs[argname] = fixturedefs
merge(fixturedefs[-1].argnames) merge(fixturedefs[-1].argnames)
def sort_by_scope(arg_name):
try:
fixturedefs = arg2fixturedefs[arg_name]
except KeyError:
return scopes.index('function')
else:
return fixturedefs[-1].scopenum
fixturenames_closure.sort(key=sort_by_scope)
return fixturenames_closure, arg2fixturedefs return fixturenames_closure, arg2fixturedefs
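``sort_by_scope`` orders the fixture name closure so that higher-scoped fixtures come before lower-scoped ones (names without a definition default to function scope). A rough illustration with hypothetical fixtures::

    import pytest

    @pytest.fixture(scope="session")
    def config():
        return {"dsn": "sqlite://"}

    @pytest.fixture  # function scope; sorted after "config" in the closure
    def connection(config):
        return "connected to %s" % config["dsn"]

    def test_connection(connection):
        assert connection.startswith("connected")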
def pytest_generate_tests(self, metafunc): def pytest_generate_tests(self, metafunc):
@ -1038,8 +1071,15 @@ class FixtureManager:
if faclist: if faclist:
fixturedef = faclist[-1] fixturedef = faclist[-1]
if fixturedef.params is not None: if fixturedef.params is not None:
func_params = getattr(getattr(metafunc.function, 'parametrize', None), 'args', [[None]]) parametrize_func = getattr(metafunc.function, 'parametrize', None)
if parametrize_func is not None:
parametrize_func = parametrize_func.combined
func_params = getattr(parametrize_func, 'args', [[None]])
func_kwargs = getattr(parametrize_func, 'kwargs', {})
# skip directly parametrized arguments # skip directly parametrized arguments
if "argnames" in func_kwargs:
argnames = parametrize_func.kwargs["argnames"]
else:
argnames = func_params[0] argnames = func_params[0]
if not isinstance(argnames, (tuple, list)): if not isinstance(argnames, (tuple, list)):
argnames = [x.strip() for x in argnames.split(",") if x.strip()] argnames = [x.strip() for x in argnames.split(",") if x.strip()]
@ -1128,6 +1168,5 @@ class FixtureManager:
def _matchfactories(self, fixturedefs, nodeid): def _matchfactories(self, fixturedefs, nodeid):
for fixturedef in fixturedefs: for fixturedef in fixturedefs:
if nodeid.startswith(fixturedef.baseid): if nodes.ischildnode(fixturedef.baseid, nodeid):
yield fixturedef yield fixturedef

View File

@ -5,7 +5,6 @@ pytest
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
def freeze_includes(): def freeze_includes():
""" """
Returns a list of module names used by py.test that should be Returns a list of module names used by py.test that should be

View File

@ -4,7 +4,8 @@ from __future__ import absolute_import, division, print_function
import py import py
import pytest import pytest
from _pytest.config import PrintHelp from _pytest.config import PrintHelp
import os, sys import os
import sys
from argparse import Action from argparse import Action
@ -44,7 +45,7 @@ def pytest_addoption(parser):
help="display pytest lib version and import information.") help="display pytest lib version and import information.")
group._addoption("-h", "--help", action=HelpAction, dest="help", group._addoption("-h", "--help", action=HelpAction, dest="help",
help="show help message and configuration info") help="show help message and configuration info")
group._addoption('-p', action="append", dest="plugins", default = [], group._addoption('-p', action="append", dest="plugins", default=[],
metavar="name", metavar="name",
help="early-load given plugin (multi-allowed). " help="early-load given plugin (multi-allowed). "
"To avoid loading of plugins, use the `no:` prefix, e.g. " "To avoid loading of plugins, use the `no:` prefix, e.g. "
@ -56,9 +57,9 @@ def pytest_addoption(parser):
action="store_true", dest="debug", default=False, action="store_true", dest="debug", default=False,
help="store internal tracing debug information in 'pytestdebug.log'.") help="store internal tracing debug information in 'pytestdebug.log'.")
group._addoption( group._addoption(
'-o', '--override-ini', nargs='*', dest="override_ini", '-o', '--override-ini', dest="override_ini",
action="append", action="append",
help="override config option with option=value style, e.g. `-o xfail_strict=True`.") help='override ini option with "option=value" style, e.g. `-o xfail_strict=True -o cache_dir=cache`.')
@pytest.hookimpl(hookwrapper=True) @pytest.hookimpl(hookwrapper=True)
@ -69,7 +70,7 @@ def pytest_cmdline_parse():
path = os.path.abspath("pytestdebug.log") path = os.path.abspath("pytestdebug.log")
debugfile = open(path, 'w') debugfile = open(path, 'w')
debugfile.write("versions pytest-%s, py-%s, " debugfile.write("versions pytest-%s, py-%s, "
"python-%s\ncwd=%s\nargs=%s\n\n" %( "python-%s\ncwd=%s\nargs=%s\n\n" % (
pytest.__version__, py.__version__, pytest.__version__, py.__version__,
".".join(map(str, sys.version_info)), ".".join(map(str, sys.version_info)),
os.getcwd(), config._origargs)) os.getcwd(), config._origargs))
@ -86,6 +87,7 @@ def pytest_cmdline_parse():
config.add_cleanup(unset_tracing) config.add_cleanup(unset_tracing)
def pytest_cmdline_main(config): def pytest_cmdline_main(config):
if config.option.version: if config.option.version:
p = py.path.local(pytest.__file__) p = py.path.local(pytest.__file__)
@ -102,6 +104,7 @@ def pytest_cmdline_main(config):
config._ensure_unconfigure() config._ensure_unconfigure()
return 0 return 0
def showhelp(config): def showhelp(config):
reporter = config.pluginmanager.get_plugin('terminalreporter') reporter = config.pluginmanager.get_plugin('terminalreporter')
tw = reporter._tw tw = reporter._tw
@ -117,7 +120,7 @@ def showhelp(config):
if type is None: if type is None:
type = "string" type = "string"
spec = "%s (%s)" % (name, type) spec = "%s (%s)" % (name, type)
line = " %-24s %s" %(spec, help) line = " %-24s %s" % (spec, help)
tw.line(line[:tw.fullwidth]) tw.line(line[:tw.fullwidth])
tw.line() tw.line()
@ -146,6 +149,7 @@ conftest_options = [
('pytest_plugins', 'list of plugin names to load'), ('pytest_plugins', 'list of plugin names to load'),
] ]
def getpluginversioninfo(config): def getpluginversioninfo(config):
lines = [] lines = []
plugininfo = config.pluginmanager.list_plugin_distinfo() plugininfo = config.pluginmanager.list_plugin_distinfo()
@ -157,11 +161,12 @@ def getpluginversioninfo(config):
lines.append(" " + content) lines.append(" " + content)
return lines return lines
def pytest_report_header(config): def pytest_report_header(config):
lines = [] lines = []
if config.option.debug or config.option.traceconfig: if config.option.debug or config.option.traceconfig:
lines.append("using: pytest-%s pylib-%s" % lines.append("using: pytest-%s pylib-%s" %
(pytest.__version__,py.__version__)) (pytest.__version__, py.__version__))
verinfo = getpluginversioninfo(config) verinfo = getpluginversioninfo(config)
if verinfo: if verinfo:
@ -175,5 +180,5 @@ def pytest_report_header(config):
r = plugin.__file__ r = plugin.__file__
else: else:
r = repr(plugin) r = repr(plugin)
lines.append(" %-20s: %s" %(name, r)) lines.append(" %-20s: %s" % (name, r))
return lines return lines

View File

@ -1,6 +1,6 @@
""" hook specifications for pytest plugins, invoked from main.py and builtin plugins. """ """ hook specifications for pytest plugins, invoked from main.py and builtin plugins. """
from _pytest._pluggy import HookspecMarker from pluggy import HookspecMarker
hookspec = HookspecMarker("pytest") hookspec = HookspecMarker("pytest")
@ -8,24 +8,44 @@ hookspec = HookspecMarker("pytest")
# Initialization hooks called for every plugin # Initialization hooks called for every plugin
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@hookspec(historic=True) @hookspec(historic=True)
def pytest_addhooks(pluginmanager): def pytest_addhooks(pluginmanager):
"""called at plugin registration time to allow adding new hooks via a call to """called at plugin registration time to allow adding new hooks via a call to
pluginmanager.add_hookspecs(module_or_class, prefix).""" ``pluginmanager.add_hookspecs(module_or_class, prefix)``.
:param _pytest.config.PytestPluginManager pluginmanager: pytest plugin manager
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@hookspec(historic=True) @hookspec(historic=True)
def pytest_namespace(): def pytest_namespace():
""" """
DEPRECATED: this hook causes direct monkeypatching on pytest, its use is strongly discouraged (**Deprecated**) this hook causes direct monkeypatching on pytest, its use is strongly discouraged
return dict of name->object to be made globally available in return dict of name->object to be made globally available in
the pytest namespace. This hook is called at plugin registration the pytest namespace.
time.
This hook is called at plugin registration time.
.. note::
This hook is incompatible with ``hookwrapper=True``.
""" """
@hookspec(historic=True) @hookspec(historic=True)
def pytest_plugin_registered(plugin, manager): def pytest_plugin_registered(plugin, manager):
""" a new pytest plugin got registered. """ """ a new pytest plugin got registered.
:param plugin: the plugin module or instance
:param _pytest.config.PytestPluginManager manager: pytest plugin manager
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
@hookspec(historic=True) @hookspec(historic=True)
@ -39,7 +59,7 @@ def pytest_addoption(parser):
files situated at the tests root directory due to how pytest files situated at the tests root directory due to how pytest
:ref:`discovers plugins during startup <pluginorder>`. :ref:`discovers plugins during startup <pluginorder>`.
:arg parser: To add command line options, call :arg _pytest.config.Parser parser: To add command line options, call
:py:func:`parser.addoption(...) <_pytest.config.Parser.addoption>`. :py:func:`parser.addoption(...) <_pytest.config.Parser.addoption>`.
To add ini-file values call :py:func:`parser.addini(...) To add ini-file values call :py:func:`parser.addini(...)
<_pytest.config.Parser.addini>`. <_pytest.config.Parser.addini>`.
@ -54,42 +74,89 @@ def pytest_addoption(parser):
a value read from an ini-style file. a value read from an ini-style file.
The config object is passed around on many internal objects via the ``.config`` The config object is passed around on many internal objects via the ``.config``
attribute or can be retrieved as the ``pytestconfig`` fixture or accessed attribute or can be retrieved as the ``pytestconfig`` fixture.
via (deprecated) ``pytest.config``.
.. note::
This hook is incompatible with ``hookwrapper=True``.
""" """
@hookspec(historic=True) @hookspec(historic=True)
def pytest_configure(config): def pytest_configure(config):
""" called after command line options have been parsed """
and all plugins and initial conftest files been loaded. Allows plugins and conftest files to perform initial configuration.
This hook is called for every plugin.
This hook is called for every plugin and initial conftest file
after command line options have been parsed.
After that, the hook is called for other conftest files as they are
imported.
.. note::
This hook is incompatible with ``hookwrapper=True``.
:arg _pytest.config.Config config: pytest config object
""" """
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
# Bootstrapping hooks called for plugins registered early enough: # Bootstrapping hooks called for plugins registered early enough:
# internal and 3rd party plugins as well as directly # internal and 3rd party plugins.
# discoverable conftest.py local plugins.
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_cmdline_parse(pluginmanager, args): def pytest_cmdline_parse(pluginmanager, args):
"""return initialized config object, parsing the specified args. """return initialized config object, parsing the specified args.
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult`
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.PytestPluginManager pluginmanager: pytest plugin manager
:param list[str] args: list of arguments passed on the command line
"""
def pytest_cmdline_preparse(config, args): def pytest_cmdline_preparse(config, args):
"""(deprecated) modify command line arguments before option parsing. """ """(**Deprecated**) modify command line arguments before option parsing.
This hook is considered deprecated and will be removed in a future pytest version. Consider
using :func:`pytest_load_initial_conftests` instead.
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.Config config: pytest config object
:param list[str] args: list of arguments passed on the command line
"""
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_cmdline_main(config): def pytest_cmdline_main(config):
""" called for performing the main command line action. The default """ called for performing the main command line action. The default
implementation will invoke the configure hooks and runtest_mainloop. implementation will invoke the configure hooks and runtest_mainloop.
Stops at first non-None result, see :ref:`firstresult` """ .. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
Stops at first non-None result, see :ref:`firstresult`
:param _pytest.config.Config config: pytest config object
"""
def pytest_load_initial_conftests(early_config, parser, args): def pytest_load_initial_conftests(early_config, parser, args):
""" implements the loading of initial conftest files ahead """ implements the loading of initial conftest files ahead
of command line option parsing. """ of command line option parsing.
.. note::
This hook will not be called for ``conftest.py`` files, only for setuptools plugins.
:param _pytest.config.Config early_config: pytest config object
:param list[str] args: list of arguments passed on the command line
:param _pytest.config.Parser parser: to add command line options
"""
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@ -98,16 +165,30 @@ def pytest_load_initial_conftests(early_config, parser, args):
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_collection(session): def pytest_collection(session):
""" perform the collection protocol for the given session. """Perform the collection protocol for the given session.
Stops at first non-None result, see :ref:`firstresult`.
:param _pytest.main.Session session: the pytest session object
"""
Stops at first non-None result, see :ref:`firstresult` """
def pytest_collection_modifyitems(session, config, items): def pytest_collection_modifyitems(session, config, items):
""" called after collection has been performed, may filter or re-order """ called after collection has been performed, may filter or re-order
the items in-place.""" the items in-place.
:param _pytest.main.Session session: the pytest session object
:param _pytest.config.Config config: pytest config object
:param List[_pytest.nodes.Item] items: list of item objects
"""
def pytest_collection_finish(session): def pytest_collection_finish(session):
""" called after collection has been performed and modified. """ """ called after collection has been performed and modified.
:param _pytest.main.Session session: the pytest session object
"""
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_ignore_collect(path, config): def pytest_ignore_collect(path, config):
@ -116,31 +197,48 @@ def pytest_ignore_collect(path, config):
more specific hooks. more specific hooks.
Stops at first non-None result, see :ref:`firstresult` Stops at first non-None result, see :ref:`firstresult`
:param str path: the path to analyze
:param _pytest.config.Config config: pytest config object
""" """
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_collect_directory(path, parent): def pytest_collect_directory(path, parent):
""" called before traversing a directory for collection files. """ called before traversing a directory for collection files.
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult`
:param str path: the path to analyze
"""
def pytest_collect_file(path, parent): def pytest_collect_file(path, parent):
""" return collection Node or None for the given path. Any new node """ return collection Node or None for the given path. Any new node
needs to have the specified ``parent`` as a parent.""" needs to have the specified ``parent`` as a parent.
:param str path: the path to collect
"""
# logging hooks for collection # logging hooks for collection
def pytest_collectstart(collector): def pytest_collectstart(collector):
""" collector starts collecting. """ """ collector starts collecting. """
def pytest_itemcollected(item): def pytest_itemcollected(item):
""" we just collected a test item. """ """ we just collected a test item. """
def pytest_collectreport(report): def pytest_collectreport(report):
""" collector finished collecting. """ """ collector finished collecting. """
def pytest_deselected(items): def pytest_deselected(items):
""" called for test items deselected by keyword. """ """ called for test items deselected by keyword. """
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_make_collect_report(collector): def pytest_make_collect_report(collector):
""" perform ``collector.collect()`` and return a CollectReport. """ perform ``collector.collect()`` and return a CollectReport.
@ -151,6 +249,7 @@ def pytest_make_collect_report(collector):
# Python test function related hooks # Python test function related hooks
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_pycollect_makemodule(path, parent): def pytest_pycollect_makemodule(path, parent):
""" return a Module collector or None for the given path. """ return a Module collector or None for the given path.
@ -160,42 +259,57 @@ def pytest_pycollect_makemodule(path, parent):
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult` """
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_pycollect_makeitem(collector, name, obj): def pytest_pycollect_makeitem(collector, name, obj):
""" return custom item/collector for a python object in a module, or None. """ return custom item/collector for a python object in a module, or None.
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult` """
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_pyfunc_call(pyfuncitem): def pytest_pyfunc_call(pyfuncitem):
""" call underlying test function. """ call underlying test function.
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult` """
def pytest_generate_tests(metafunc): def pytest_generate_tests(metafunc):
""" generate (multiple) parametrized calls to a test function.""" """ generate (multiple) parametrized calls to a test function."""
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_make_parametrize_id(config, val, argname): def pytest_make_parametrize_id(config, val, argname):
"""Return a user-friendly string representation of the given ``val`` that will be used """Return a user-friendly string representation of the given ``val`` that will be used
by @pytest.mark.parametrize calls. Return None if the hook doesn't know about ``val``. by @pytest.mark.parametrize calls. Return None if the hook doesn't know about ``val``.
The parameter name is available as ``argname``, if required. The parameter name is available as ``argname``, if required.
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult`
:param _pytest.config.Config config: pytest config object
:param val: the parametrized value
:param str argname: the automatic parameter name produced by pytest
"""
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
# generic runtest related hooks # generic runtest related hooks
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_runtestloop(session): def pytest_runtestloop(session):
""" called for performing the main runtest loop """ called for performing the main runtest loop
(after collection finished). (after collection finished).
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult`
:param _pytest.main.Session session: the pytest session object
"""
def pytest_itemstart(item, node): def pytest_itemstart(item, node):
""" (deprecated, use pytest_runtest_logstart). """ """(**Deprecated**) use pytest_runtest_logstart. """
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_runtest_protocol(item, nextitem): def pytest_runtest_protocol(item, nextitem):
@ -214,15 +328,37 @@ def pytest_runtest_protocol(item, nextitem):
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult` """
def pytest_runtest_logstart(nodeid, location): def pytest_runtest_logstart(nodeid, location):
""" signal the start of running a single test item. """ """ signal the start of running a single test item.
This hook will be called **before** :func:`pytest_runtest_setup`, :func:`pytest_runtest_call` and
:func:`pytest_runtest_teardown` hooks.
:param str nodeid: full id of the item
:param location: a triple of ``(filename, linenum, testname)``
"""
def pytest_runtest_logfinish(nodeid, location):
""" signal the complete finish of running a single test item.
This hook will be called **after** :func:`pytest_runtest_setup`, :func:`pytest_runtest_call` and
:func:`pytest_runtest_teardown` hooks.
:param str nodeid: full id of the item
:param location: a triple of ``(filename, linenum, testname)``
"""
def pytest_runtest_setup(item): def pytest_runtest_setup(item):
""" called before ``pytest_runtest_call(item)``. """ """ called before ``pytest_runtest_call(item)``. """
def pytest_runtest_call(item): def pytest_runtest_call(item):
""" called to execute the test ``item``. """ """ called to execute the test ``item``. """
def pytest_runtest_teardown(item, nextitem): def pytest_runtest_teardown(item, nextitem):
""" called after ``pytest_runtest_call``. """ called after ``pytest_runtest_call``.
@ -232,6 +368,7 @@ def pytest_runtest_teardown(item, nextitem):
so that nextitem only needs to call setup-functions. so that nextitem only needs to call setup-functions.
""" """
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_runtest_makereport(item, call): def pytest_runtest_makereport(item, call):
""" return a :py:class:`_pytest.runner.TestReport` object """ return a :py:class:`_pytest.runner.TestReport` object
@ -240,6 +377,7 @@ def pytest_runtest_makereport(item, call):
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult` """
def pytest_runtest_logreport(report): def pytest_runtest_logreport(report):
""" process a test setup/call/teardown report relating to """ process a test setup/call/teardown report relating to
the respective phase of executing a test. """ the respective phase of executing a test. """
@ -248,13 +386,23 @@ def pytest_runtest_logreport(report):
# Fixture related hooks # Fixture related hooks
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_fixture_setup(fixturedef, request): def pytest_fixture_setup(fixturedef, request):
""" performs fixture setup execution. """ performs fixture setup execution.
Stops at first non-None result, see :ref:`firstresult` """ :return: The return value of the call to the fixture function
def pytest_fixture_post_finalizer(fixturedef): Stops at first non-None result, see :ref:`firstresult`
.. note::
If the fixture function returns None, other implementations of
this hook function will continue to be called, according to the
behavior of the :ref:`firstresult` option.
"""
def pytest_fixture_post_finalizer(fixturedef, request):
""" called after fixture teardown, but before the cache is cleared so """ called after fixture teardown, but before the cache is cleared so
the fixture result cache ``fixturedef.cached_result`` can the fixture result cache ``fixturedef.cached_result`` can
still be accessed.""" still be accessed."""
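``pytest_fixture_post_finalizer`` now also receives the ``request`` that triggered the finalization. A small ``conftest.py`` sketch using the updated signature; the printed message is made up::

    def pytest_fixture_post_finalizer(fixturedef, request):
        print("finalized %r for %s" % (fixturedef.argname, request.node.nodeid))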
@ -263,14 +411,28 @@ def pytest_fixture_post_finalizer(fixturedef):
# test session related hooks # test session related hooks
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
def pytest_sessionstart(session): def pytest_sessionstart(session):
""" before session.main() is called. """ """ called after the ``Session`` object has been created and before performing collection
and entering the run test loop.
:param _pytest.main.Session session: the pytest session object
"""
def pytest_sessionfinish(session, exitstatus): def pytest_sessionfinish(session, exitstatus):
""" whole test run finishes. """ """ called after whole test run finished, right before returning the exit status to the system.
:param _pytest.main.Session session: the pytest session object
:param int exitstatus: the status which pytest will return to the system
"""
def pytest_unconfigure(config): def pytest_unconfigure(config):
""" called before test process is exited. """ """ called before test process is exited.
:param _pytest.config.Config config: pytest config object
"""
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@ -284,14 +446,20 @@ def pytest_assertrepr_compare(config, op, left, right):
of strings. The strings will be joined by newlines but any newlines of strings. The strings will be joined by newlines but any newlines
*in* a string will be escaped. Note that all but the first line will *in* a string will be escaped. Note that all but the first line will
be indented slightly, the intention is for the first line to be a summary. be indented slightly, the intention is for the first line to be a summary.
:param _pytest.config.Config config: pytest config object
""" """
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
# hooks for influencing reporting (invoked from _pytest_terminal) # hooks for influencing reporting (invoked from _pytest_terminal)
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
def pytest_report_header(config, startdir): def pytest_report_header(config, startdir):
""" return a string to be displayed as header info for terminal reporting. """ return a string or list of strings to be displayed as header info for terminal reporting.
:param _pytest.config.Config config: pytest config object
:param startdir: py.path object with the starting dir
.. note:: .. note::
@ -300,26 +468,54 @@ def pytest_report_header(config, startdir):
:ref:`discovers plugins during startup <pluginorder>`. :ref:`discovers plugins during startup <pluginorder>`.
""" """
def pytest_report_collectionfinish(config, startdir, items):
"""
.. versionadded:: 3.2
return a string or list of strings to be displayed after collection has finished successfully.
These strings will be displayed after the standard "collected X items" message.
:param _pytest.config.Config config: pytest config object
:param startdir: py.path object with the starting dir
:param items: list of pytest items that are going to be executed; this list should not be modified.
"""
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_report_teststatus(report): def pytest_report_teststatus(report):
""" return result-category, shortletter and verbose word for reporting. """ return result-category, shortletter and verbose word for reporting.
Stops at first non-None result, see :ref:`firstresult` """ Stops at first non-None result, see :ref:`firstresult` """
def pytest_terminal_summary(terminalreporter, exitstatus): def pytest_terminal_summary(terminalreporter, exitstatus):
""" add additional section in terminal summary reporting. """ """Add a section to terminal summary reporting.
:param _pytest.terminal.TerminalReporter terminalreporter: the internal terminal reporter object
:param int exitstatus: the exit status that will be reported back to the OS
.. versionadded:: 3.5
The ``config`` parameter.
"""
@hookspec(historic=True) @hookspec(historic=True)
def pytest_logwarning(message, code, nodeid, fslocation): def pytest_logwarning(message, code, nodeid, fslocation):
""" process a warning specified by a message, a code string, """ process a warning specified by a message, a code string,
a nodeid and fslocation (both of which may be None a nodeid and fslocation (both of which may be None
if the warning is not tied to a partilar node/location).""" if the warning is not tied to a particular node/location).
.. note::
This hook is incompatible with ``hookwrapper=True``.
"""
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
# doctest hooks # doctest hooks
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
@hookspec(firstresult=True) @hookspec(firstresult=True)
def pytest_doctest_prepare_content(content): def pytest_doctest_prepare_content(content):
""" return processed content for a given doctest """ return processed content for a given doctest
@ -330,12 +526,15 @@ def pytest_doctest_prepare_content(content):
# error handling and internal debugging hooks # error handling and internal debugging hooks
# ------------------------------------------------------------------------- # -------------------------------------------------------------------------
def pytest_internalerror(excrepr, excinfo): def pytest_internalerror(excrepr, excinfo):
""" called for internal errors. """ """ called for internal errors. """
def pytest_keyboard_interrupt(excinfo): def pytest_keyboard_interrupt(excinfo):
""" called for keyboard interrupt. """ """ called for keyboard interrupt. """
def pytest_exception_interact(node, call, report): def pytest_exception_interact(node, call, report):
"""called when an exception was raised which can potentially be """called when an exception was raised which can potentially be
interactively handled. interactively handled.
@ -344,10 +543,10 @@ def pytest_exception_interact(node, call, report):
that is not an internal exception like ``skip.Exception``. that is not an internal exception like ``skip.Exception``.
""" """
def pytest_enter_pdb(config): def pytest_enter_pdb(config):
""" called upon pdb.set_trace(), can be used by plugins to take special """ called upon pdb.set_trace(), can be used by plugins to take special
action just before the python debugger enters in interactive mode. action just before the python debugger enters in interactive mode.
:arg config: pytest config object :param _pytest.config.Config config: pytest config object
:type config: _pytest.config.Config
""" """

View File

@ -1,254 +0,0 @@
Sorting per-resource
-----------------------------
for any given set of items:
- collect items per session-scoped parametrized funcarg
- re-order items so that no parametrizations are mixed
examples:
test()
test1(s1)
test1(s2)
test2()
test3(s1)
test3(s2)
gets sorted to:
test()
test2()
test1(s1)
test3(s1)
test1(s2)
test3(s2)
the new @setup functions
--------------------------------------
Consider a given @setup-marked function::
@pytest.mark.setup(maxscope=SCOPE)
def mysetup(request, arg1, arg2, ...)
...
request.addfinalizer(fin)
...
then FUNCARGSET denotes the set of (arg1, arg2, ...) funcargs and
all of its dependent funcargs. The mysetup function will execute
for any matching test item once per scope.
The scope is determined as the minimum scope of all scopes of the args
in FUNCARGSET and the given "maxscope".
If mysetup has been called and no finalizers have been called it is
called "active".
Furthermore the following rules apply:
- if an arg value in FUNCARGSET is about to be torn down, the
mysetup-registered finalizers will execute as well.
- There will never be two active mysetup invocations.
Example 1, session scope::
@pytest.mark.funcarg(scope="session", params=[1,2])
def db(request):
request.addfinalizer(db_finalize)
@pytest.mark.setup
def mysetup(request, db):
request.addfinalizer(mysetup_finalize)
...
And a given test module:
def test_something():
...
def test_otherthing():
pass
Here is what happens::
db(request) executes with request.param == 1
mysetup(request, db) executes
test_something() executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
db(request) executes with request.param == 2
mysetup(request, db) executes
test_something() executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
Example 2, session/function scope::
@pytest.mark.funcarg(scope="session", params=[1,2])
def db(request):
request.addfinalizer(db_finalize)
@pytest.mark.setup(scope="function")
def mysetup(request, db):
...
request.addfinalizer(mysetup_finalize)
...
And a given test module:
def test_something():
...
def test_otherthing():
pass
Here is what happens::
db(request) executes with request.param == 1
mysetup(request, db) executes
test_something() executes
mysetup_finalize() executes
mysetup(request, db) executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
db(request) executes with request.param == 2
mysetup(request, db) executes
test_something() executes
mysetup_finalize() executes
mysetup(request, db) executes
test_otherthing() executes
mysetup_finalize() executes
db_finalize() executes
Example 3 - funcargs session-mix
----------------------------------------
Similar with funcargs, an example::
@pytest.mark.funcarg(scope="session", params=[1,2])
def db(request):
request.addfinalizer(db_finalize)
@pytest.mark.funcarg(scope="function")
def table(request, db):
...
request.addfinalizer(table_finalize)
...
And a given test module:
def test_something(table):
...
def test_otherthing(table):
pass
def test_thirdthing():
pass
Here is what happens::
db(request) executes with param == 1
table(request, db)
test_something(table)
table_finalize()
table(request, db)
test_otherthing(table)
table_finalize()
db_finalize
db(request) executes with param == 2
table(request, db)
test_something(table)
table_finalize()
table(request, db)
test_otherthing(table)
table_finalize()
db_finalize
test_thirdthing()
Data structures
--------------------
pytest internally maintains a dict of active funcargs with cache, param,
finalizer, (scopeitem?) information:
active_funcargs = dict()
if a parametrized "db" is activated:
active_funcargs["db"] = FuncargInfo(dbvalue, paramindex,
FuncargFinalize(...), scopeitem)
if a test is torn down and the next test requires a differently
parametrized "db":
for argname in item.callspec.params:
if argname in active_funcargs:
funcarginfo = active_funcargs[argname]
if funcarginfo.param != item.callspec.params[argname]:
funcarginfo.callfinalizer()
del node2funcarg[funcarginfo.scopeitem]
del active_funcargs[argname]
nodes_to_be_torn_down = ...
for node in nodes_to_be_torn_down:
if node in node2funcarg:
argname = node2funcarg[node]
active_funcargs[argname].callfinalizer()
del node2funcarg[node]
del active_funcargs[argname]
if a test is setup requiring a "db" funcarg:
if "db" in active_funcargs:
return active_funcargs["db"][0]
funcarginfo = setup_funcarg()
active_funcargs["db"] = funcarginfo
node2funcarg[funcarginfo.scopeitem] = "db"
Implementation plan for resources
------------------------------------------
1. Revert FuncargRequest to the old form, unmerge item/request
(done)
2. make funcarg factories be discovered at collection time
3. Introduce funcarg marker
4. Introduce funcarg scope parameter
5. Introduce funcarg parametrize parameter
6. make setup functions be discovered at collection time
7. (Introduce a pytest_fixture_protocol/setup_funcargs hook)
methods and data structures
--------------------------------
A FuncargManager holds all information about funcarg definitions
including parametrization and scope definitions. It implements
a pytest_generate_tests hook which performs parametrization as appropriate.
As a simple example, let's consider a tree where a test function requires
an "abc" funcarg and its factory defines it as parametrized and scoped
for Modules. When collection hits the function item, it creates
the metafunc object, and calls funcargdb.pytest_generate_tests(metafunc)
which looks up available funcarg factories and their scope and parametrization.
This information is equivalent to what can be provided today directly
at the function site and it should thus be relatively straightforward
to implement the additional way of defining parametrization/scoping.
conftest loading:
each funcarg-factory will populate the session.funcargmanager
When a test item is collected, it grows a dictionary
(funcargname2factorycalllist). A factory lookup is performed
for each required funcarg. The resulting factory call is stored
with the item. If a function is parametrized multiple items are
created with respective factory calls. Else if a factory is parametrized
multiple items and calls to the factory function are created as well.
At setup time, an item populates a funcargs mapping, mapping names
to values. If a value is missing, the funcarg factories are queried for the given item.
test functions and setup functions are put in a class
which looks up required funcarg factories.

View File

@ -17,6 +17,7 @@ import re
import sys import sys
import time import time
import pytest import pytest
from _pytest import nodes
from _pytest.config import filename_arg from _pytest.config import filename_arg
# Python 2.X and 3.X compatibility # Python 2.X and 3.X compatibility
@ -84,6 +85,9 @@ class _NodeReporter(object):
def add_property(self, name, value): def add_property(self, name, value):
self.properties.append((str(name), bin_xml_escape(value))) self.properties.append((str(name), bin_xml_escape(value)))
def add_attribute(self, name, value):
self.attrs[str(name)] = bin_xml_escape(value)
def make_properties_node(self): def make_properties_node(self):
"""Return a Junit node containing custom properties, if any. """Return a Junit node containing custom properties, if any.
""" """
@ -97,6 +101,7 @@ class _NodeReporter(object):
def record_testreport(self, testreport): def record_testreport(self, testreport):
assert not self.testcase assert not self.testcase
names = mangle_test_address(testreport.nodeid) names = mangle_test_address(testreport.nodeid)
existing_attrs = self.attrs
classnames = names[:-1] classnames = names[:-1]
if self.xml.prefix: if self.xml.prefix:
classnames.insert(0, self.xml.prefix) classnames.insert(0, self.xml.prefix)
@ -110,6 +115,7 @@ class _NodeReporter(object):
if hasattr(testreport, "url"): if hasattr(testreport, "url"):
attrs["url"] = testreport.url attrs["url"] = testreport.url
self.attrs = attrs self.attrs = attrs
self.attrs.update(existing_attrs) # restore any user-defined attributes
def to_xml(self): def to_xml(self):
testcase = Junit.testcase(time=self.duration, **self.attrs) testcase = Junit.testcase(time=self.duration, **self.attrs)
@ -124,10 +130,47 @@ class _NodeReporter(object):
self.append(node) self.append(node)
def write_captured_output(self, report): def write_captured_output(self, report):
for capname in ('out', 'err'): content_out = report.capstdout
content = getattr(report, 'capstd' + capname) content_log = report.caplog
content_err = report.capstderr
if content_log or content_out:
if content_log and self.xml.logging == 'system-out':
if content_out:
# syncing stdout and the log-output is not done yet. It's
# probably not worth the effort. Therefore, first the captured
# stdout is shown and then the captured logs.
content = '\n'.join([
' Captured Stdout '.center(80, '-'),
content_out,
'',
' Captured Log '.center(80, '-'),
content_log])
else:
content = content_log
else:
content = content_out
if content: if content:
tag = getattr(Junit, 'system-' + capname) tag = getattr(Junit, 'system-out')
self.append(tag(bin_xml_escape(content)))
if content_log or content_err:
if content_log and self.xml.logging == 'system-err':
if content_err:
content = '\n'.join([
' Captured Stderr '.center(80, '-'),
content_err,
'',
' Captured Log '.center(80, '-'),
content_log])
else:
content = content_log
else:
content = content_err
if content:
tag = getattr(Junit, 'system-err')
self.append(tag(bin_xml_escape(content))) self.append(tag(bin_xml_escape(content)))
def append_pass(self, report): def append_pass(self, report):
@ -190,24 +233,56 @@ class _NodeReporter(object):
@pytest.fixture @pytest.fixture
def record_xml_property(request): def record_property(request):
"""Add extra xml properties to the tag for the calling test. """Add an extra properties the calling test.
User properties become part of the test report and are available to the
configured reporters, like JUnit XML.
The fixture is callable with ``(name, value)``, with value being automatically The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded. xml-encoded.
Example::
def test_function(record_property):
record_property("example_key", 1)
"""
def append_property(name, value):
request.node.user_properties.append((name, value))
return append_property
@pytest.fixture
def record_xml_property(record_property):
"""(Deprecated) use record_property."""
import warnings
from _pytest import deprecated
warnings.warn(
deprecated.RECORD_XML_PROPERTY,
DeprecationWarning,
stacklevel=2
)
return record_property
@pytest.fixture
def record_xml_attribute(request):
"""Add extra xml attributes to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being
automatically xml-encoded
""" """
request.node.warn( request.node.warn(
code='C3', code='C3',
message='record_xml_property is an experimental feature', message='record_xml_attribute is an experimental feature',
) )
xml = getattr(request.config, "_xml", None) xml = getattr(request.config, "_xml", None)
if xml is not None: if xml is not None:
node_reporter = xml.node_reporter(request.node.nodeid) node_reporter = xml.node_reporter(request.node.nodeid)
return node_reporter.add_property return node_reporter.add_attribute
else: else:
def add_property_noop(name, value): def add_attr_noop(name, value):
pass pass
return add_property_noop return add_attr_noop
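Usage mirrors ``record_property``; the attribute name and value below are purely illustrative::

    def test_function(record_xml_attribute):
        record_xml_attribute("assertions", "REQ-1234")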
def pytest_addoption(parser): def pytest_addoption(parser):
@ -227,13 +302,18 @@ def pytest_addoption(parser):
default=None, default=None,
help="prepend prefix to classnames in junit-xml output") help="prepend prefix to classnames in junit-xml output")
parser.addini("junit_suite_name", "Test suite name for JUnit report", default="pytest") parser.addini("junit_suite_name", "Test suite name for JUnit report", default="pytest")
parser.addini("junit_logging", "Write captured log messages to JUnit report: "
"one of no|system-out|system-err",
default="no") # choices=['no', 'stdout', 'stderr'])
def pytest_configure(config): def pytest_configure(config):
xmlpath = config.option.xmlpath xmlpath = config.option.xmlpath
# prevent opening xmllog on slave nodes (xdist) # prevent opening xmllog on slave nodes (xdist)
if xmlpath and not hasattr(config, 'slaveinput'): if xmlpath and not hasattr(config, 'slaveinput'):
config._xml = LogXML(xmlpath, config.option.junitprefix, config.getini("junit_suite_name")) config._xml = LogXML(xmlpath, config.option.junitprefix,
config.getini("junit_suite_name"),
config.getini("junit_logging"))
config.pluginmanager.register(config._xml) config.pluginmanager.register(config._xml)
@ -252,7 +332,7 @@ def mangle_test_address(address):
except ValueError: except ValueError:
pass pass
# convert file path to dotted path # convert file path to dotted path
names[0] = names[0].replace("/", '.') names[0] = names[0].replace(nodes.SEP, '.')
names[0] = _py_ext_re.sub("", names[0]) names[0] = _py_ext_re.sub("", names[0])
# put any params back # put any params back
names[-1] += possible_open_bracket + params names[-1] += possible_open_bracket + params
@ -260,11 +340,12 @@ def mangle_test_address(address):
class LogXML(object): class LogXML(object):
def __init__(self, logfile, prefix, suite_name="pytest"): def __init__(self, logfile, prefix, suite_name="pytest", logging="no"):
logfile = os.path.expanduser(os.path.expandvars(logfile)) logfile = os.path.expanduser(os.path.expandvars(logfile))
self.logfile = os.path.normpath(os.path.abspath(logfile)) self.logfile = os.path.normpath(os.path.abspath(logfile))
self.prefix = prefix self.prefix = prefix
self.suite_name = suite_name self.suite_name = suite_name
self.logging = logging
self.stats = dict.fromkeys([ self.stats = dict.fromkeys([
'error', 'error',
'passed', 'passed',
@ -372,6 +453,10 @@ class LogXML(object):
if report.when == "teardown": if report.when == "teardown":
reporter = self._opentestcase(report) reporter = self._opentestcase(report)
reporter.write_captured_output(report) reporter.write_captured_output(report)
for propname, propvalue in report.user_properties:
reporter.add_property(propname, propvalue)
self.finalize(report) self.finalize(report)
report_wid = getattr(report, "worker_id", None) report_wid = getattr(report, "worker_id", None)
report_ii = getattr(report, "item_index", None) report_ii = getattr(report, "item_index", None)

522
_pytest/logging.py Normal file
View File

@ -0,0 +1,522 @@
""" Access and control log capturing. """
from __future__ import absolute_import, division, print_function
import logging
from contextlib import closing, contextmanager
import re
import six
from _pytest.config import create_terminal_writer
import pytest
import py
DEFAULT_LOG_FORMAT = '%(filename)-25s %(lineno)4d %(levelname)-8s %(message)s'
DEFAULT_LOG_DATE_FORMAT = '%H:%M:%S'
class ColoredLevelFormatter(logging.Formatter):
"""
Colorize the %(levelname)..s part of the log format passed to __init__.
"""
LOGLEVEL_COLOROPTS = {
logging.CRITICAL: {'red'},
logging.ERROR: {'red', 'bold'},
logging.WARNING: {'yellow'},
logging.WARN: {'yellow'},
logging.INFO: {'green'},
logging.DEBUG: {'purple'},
logging.NOTSET: set(),
}
LEVELNAME_FMT_REGEX = re.compile(r'%\(levelname\)([+-]?\d*s)')
def __init__(self, terminalwriter, *args, **kwargs):
super(ColoredLevelFormatter, self).__init__(
*args, **kwargs)
if six.PY2:
self._original_fmt = self._fmt
else:
self._original_fmt = self._style._fmt
self._level_to_fmt_mapping = {}
levelname_fmt_match = self.LEVELNAME_FMT_REGEX.search(self._fmt)
if not levelname_fmt_match:
return
levelname_fmt = levelname_fmt_match.group()
for level, color_opts in self.LOGLEVEL_COLOROPTS.items():
formatted_levelname = levelname_fmt % {
'levelname': logging.getLevelName(level)}
# add ANSI escape sequences around the formatted levelname
color_kwargs = {name: True for name in color_opts}
colorized_formatted_levelname = terminalwriter.markup(
formatted_levelname, **color_kwargs)
self._level_to_fmt_mapping[level] = self.LEVELNAME_FMT_REGEX.sub(
colorized_formatted_levelname,
self._fmt)
def format(self, record):
fmt = self._level_to_fmt_mapping.get(
record.levelno, self._original_fmt)
if six.PY2:
self._fmt = fmt
else:
self._style._fmt = fmt
return super(ColoredLevelFormatter, self).format(record)
def get_option_ini(config, *names):
for name in names:
ret = config.getoption(name) # 'default' arg won't work as expected
if ret is None:
ret = config.getini(name)
if ret:
return ret
def pytest_addoption(parser):
"""Add options to control log capturing."""
group = parser.getgroup('logging')
def add_option_ini(option, dest, default=None, type=None, **kwargs):
parser.addini(dest, default=default, type=type,
help='default value for ' + option)
group.addoption(option, dest=dest, **kwargs)
add_option_ini(
'--no-print-logs',
dest='log_print', action='store_const', const=False, default=True,
type='bool',
help='disable printing caught logs on failed tests.')
add_option_ini(
'--log-level',
dest='log_level', default=None,
help='logging level used by the logging module')
add_option_ini(
'--log-format',
dest='log_format', default=DEFAULT_LOG_FORMAT,
help='log format as used by the logging module.')
add_option_ini(
'--log-date-format',
dest='log_date_format', default=DEFAULT_LOG_DATE_FORMAT,
help='log date format as used by the logging module.')
parser.addini(
'log_cli', default=False, type='bool',
help='enable log display during test run (also known as "live logging").')
add_option_ini(
'--log-cli-level',
dest='log_cli_level', default=None,
help='cli logging level.')
add_option_ini(
'--log-cli-format',
dest='log_cli_format', default=None,
help='log format as used by the logging module.')
add_option_ini(
'--log-cli-date-format',
dest='log_cli_date_format', default=None,
help='log date format as used by the logging module.')
add_option_ini(
'--log-file',
dest='log_file', default=None,
help='path to a file where logging will be written to.')
add_option_ini(
'--log-file-level',
dest='log_file_level', default=None,
help='log file logging level.')
add_option_ini(
'--log-file-format',
dest='log_file_format', default=DEFAULT_LOG_FORMAT,
help='log format as used by the logging module.')
add_option_ini(
'--log-file-date-format',
dest='log_file_date_format', default=DEFAULT_LOG_DATE_FORMAT,
help='log date format as used by the logging module.')
@contextmanager
def catching_logs(handler, formatter=None, level=None):
"""Context manager that prepares the whole logging machinery properly."""
root_logger = logging.getLogger()
if formatter is not None:
handler.setFormatter(formatter)
if level is not None:
handler.setLevel(level)
# Adding the same handler twice would confuse logging system.
# Just don't do that.
add_new_handler = handler not in root_logger.handlers
if add_new_handler:
root_logger.addHandler(handler)
if level is not None:
orig_level = root_logger.level
root_logger.setLevel(min(orig_level, level))
try:
yield handler
finally:
if level is not None:
root_logger.setLevel(orig_level)
if add_new_handler:
root_logger.removeHandler(handler)
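A rough usage sketch of this internal helper (handler and logger names are illustrative)::

    with catching_logs(LogCaptureHandler(),
                       formatter=logging.Formatter(DEFAULT_LOG_FORMAT),
                       level=logging.INFO) as handler:
        logging.getLogger("example").info("hello")
    # handler.records now holds the captured LogRecord instances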
class LogCaptureHandler(logging.StreamHandler):
"""A logging handler that stores log records and the log text."""
def __init__(self):
"""Creates a new log handler."""
logging.StreamHandler.__init__(self, py.io.TextIO())
self.records = []
def emit(self, record):
"""Keep the log records in a list in addition to the log text."""
self.records.append(record)
logging.StreamHandler.emit(self, record)
def reset(self):
self.records = []
self.stream = py.io.TextIO()
class LogCaptureFixture(object):
"""Provides access and control of log capturing."""
def __init__(self, item):
"""Creates a new funcarg."""
self._item = item
self._initial_log_levels = {} # type: Dict[str, int] # dict of log name -> log level
def _finalize(self):
"""Finalizes the fixture.
This restores the log levels changed by :meth:`set_level`.
"""
# restore log levels
for logger_name, level in self._initial_log_levels.items():
logger = logging.getLogger(logger_name)
logger.setLevel(level)
@property
def handler(self):
"""
:rtype: LogCaptureHandler
"""
return self._item.catch_log_handler
def get_records(self, when):
"""
Get the logging records for one of the possible test phases.
:param str when:
Which test phase to obtain the records from. Valid values are: "setup", "call" and "teardown".
:rtype: List[logging.LogRecord]
:return: the list of captured records at the given stage
.. versionadded:: 3.4
"""
handler = self._item.catch_log_handlers.get(when)
if handler:
return handler.records
else:
return []
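For instance, a test could assert that its own setup phase did not log anything (hypothetical test)::

    def test_setup_was_quiet(caplog):
        assert caplog.get_records("setup") == []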
@property
def text(self):
"""Returns the log text."""
return self.handler.stream.getvalue()
@property
def records(self):
"""Returns the list of log records."""
return self.handler.records
@property
def record_tuples(self):
"""Returns a list of a striped down version of log records intended
for use in assertion comparison.
The format of the tuple is:
(logger_name, log_level, message)
"""
return [(r.name, r.levelno, r.getMessage()) for r in self.records]
def clear(self):
"""Reset the list of log records and the captured log text."""
self.handler.reset()
def set_level(self, level, logger=None):
"""Sets the level for capturing of logs. The level will be restored to its previous value at the end of
the test.
:param int level: the level to set.
:param str logger: the logger whose level to update. If not given, the root logger level is updated.
.. versionchanged:: 3.4
The levels of the loggers changed by this function will be restored to their initial values at the
end of the test.
"""
logger_name = logger
logger = logging.getLogger(logger_name)
# save the original log-level to restore it during teardown
self._initial_log_levels.setdefault(logger_name, logger.level)
logger.setLevel(level)
@contextmanager
def at_level(self, level, logger=None):
"""Context manager that sets the level for capturing of logs. After the end of the 'with' statement the
level is restored to its original value.
:param int level: the level to set.
:param str logger: the logger whose level to update. If not given, the root logger level is updated.
"""
logger = logging.getLogger(logger)
orig_level = logger.level
logger.setLevel(level)
try:
yield
finally:
logger.setLevel(orig_level)
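Both helpers are normally reached through the ``caplog`` fixture, e.g. (logger name is illustrative)::

    import logging

    def test_debug_capture(caplog):
        caplog.set_level(logging.INFO)
        with caplog.at_level(logging.DEBUG, logger="my.pkg"):
            logging.getLogger("my.pkg").debug("captured at DEBUG")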
@pytest.fixture
def caplog(request):
"""Access and control log capturing.
Captured logs are available through the following properties/methods::
* caplog.text            -> string containing formatted log output
* caplog.records         -> list of logging.LogRecord instances
* caplog.record_tuples   -> list of (logger_name, level, message) tuples
* caplog.clear()         -> clear captured records and formatted log output string
"""
result = LogCaptureFixture(request.node)
yield result
result._finalize()
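A minimal example of asserting on captured records (logger name and message are made up)::

    import logging

    def test_warning_is_logged(caplog):
        logging.getLogger("my.pkg").warning("disk almost full")
        assert caplog.record_tuples == [
            ("my.pkg", logging.WARNING, "disk almost full"),
        ]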
def get_actual_log_level(config, *setting_names):
"""Return the actual logging level."""
for setting_name in setting_names:
log_level = config.getoption(setting_name)
if log_level is None:
log_level = config.getini(setting_name)
if log_level:
break
else:
return
if isinstance(log_level, six.string_types):
log_level = log_level.upper()
try:
return int(getattr(logging, log_level, log_level))
except ValueError:
# Python logging does not recognise this as a logging level
raise pytest.UsageError(
"'{0}' is not recognized as a logging level name for "
"'{1}'. Please consider passing the "
"logging level num instead.".format(
log_level,
setting_name))
def pytest_configure(config):
config.pluginmanager.register(LoggingPlugin(config), 'logging-plugin')
@contextmanager
def _dummy_context_manager():
yield
class LoggingPlugin(object):
"""Attaches to the logging module and captures log messages for each test.
"""
def __init__(self, config):
"""Creates a new plugin to capture log messages.
The formatter can be safely shared across all handlers so
create a single one for the entire test session here.
"""
self._config = config
# enable verbose output automatically if live logging is enabled
if self._log_cli_enabled() and not config.getoption('verbose'):
# sanity check: terminal reporter should not have been loaded at this point
assert self._config.pluginmanager.get_plugin('terminalreporter') is None
config.option.verbose = 1
self.print_logs = get_option_ini(config, 'log_print')
self.formatter = logging.Formatter(get_option_ini(config, 'log_format'),
get_option_ini(config, 'log_date_format'))
self.log_level = get_actual_log_level(config, 'log_level')
log_file = get_option_ini(config, 'log_file')
if log_file:
self.log_file_level = get_actual_log_level(config, 'log_file_level')
log_file_format = get_option_ini(config, 'log_file_format', 'log_format')
log_file_date_format = get_option_ini(config, 'log_file_date_format', 'log_date_format')
# Each pytest runtests session will write to a clean logfile
self.log_file_handler = logging.FileHandler(log_file, mode='w')
log_file_formatter = logging.Formatter(log_file_format, datefmt=log_file_date_format)
self.log_file_handler.setFormatter(log_file_formatter)
else:
self.log_file_handler = None
# initialized during pytest_runtestloop
self.log_cli_handler = None
def _log_cli_enabled(self):
"""Return True if log_cli should be considered enabled, either explicitly
or because --log-cli-level was given in the command-line.
"""
return self._config.getoption('--log-cli-level') is not None or \
self._config.getini('log_cli')
@contextmanager
def _runtest_for(self, item, when):
"""Implements the internals of pytest_runtest_xxx() hook."""
with catching_logs(LogCaptureHandler(),
formatter=self.formatter, level=self.log_level) as log_handler:
if self.log_cli_handler:
self.log_cli_handler.set_when(when)
if item is None:
yield # run the test
return
if not hasattr(item, 'catch_log_handlers'):
item.catch_log_handlers = {}
item.catch_log_handlers[when] = log_handler
item.catch_log_handler = log_handler
try:
yield # run test
finally:
del item.catch_log_handler
if when == 'teardown':
del item.catch_log_handlers
if self.print_logs:
# Add a captured log section to the report.
log = log_handler.stream.getvalue().strip()
item.add_report_section(when, 'log', log)
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_setup(self, item):
with self._runtest_for(item, 'setup'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(self, item):
with self._runtest_for(item, 'call'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_teardown(self, item):
with self._runtest_for(item, 'teardown'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logstart(self):
if self.log_cli_handler:
self.log_cli_handler.reset()
with self._runtest_for(None, 'start'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_logfinish(self):
with self._runtest_for(None, 'finish'):
yield
@pytest.hookimpl(hookwrapper=True)
def pytest_runtestloop(self, session):
"""Runs all collected test items."""
self._setup_cli_logging()
with self.live_logs_context:
if self.log_file_handler is not None:
with closing(self.log_file_handler):
with catching_logs(self.log_file_handler,
level=self.log_file_level):
yield # run all the tests
else:
yield # run all the tests
def _setup_cli_logging(self):
"""Sets up the handler and logger for the Live Logs feature, if enabled.
This must be done right before starting the loop so we can access the terminal reporter plugin.
"""
terminal_reporter = self._config.pluginmanager.get_plugin('terminalreporter')
if self._log_cli_enabled() and terminal_reporter is not None:
capture_manager = self._config.pluginmanager.get_plugin('capturemanager')
log_cli_handler = _LiveLoggingStreamHandler(terminal_reporter, capture_manager)
log_cli_format = get_option_ini(self._config, 'log_cli_format', 'log_format')
log_cli_date_format = get_option_ini(self._config, 'log_cli_date_format', 'log_date_format')
if self._config.option.color != 'no' and ColoredLevelFormatter.LEVELNAME_FMT_REGEX.search(log_cli_format):
log_cli_formatter = ColoredLevelFormatter(create_terminal_writer(self._config),
log_cli_format, datefmt=log_cli_date_format)
else:
log_cli_formatter = logging.Formatter(log_cli_format, datefmt=log_cli_date_format)
log_cli_level = get_actual_log_level(self._config, 'log_cli_level', 'log_level')
self.log_cli_handler = log_cli_handler
self.live_logs_context = catching_logs(log_cli_handler, formatter=log_cli_formatter, level=log_cli_level)
else:
self.live_logs_context = _dummy_context_manager()
class _LiveLoggingStreamHandler(logging.StreamHandler):
"""
Custom StreamHandler used by the live logging feature: it will write a newline before the first log message
in each test.
During live logging we must also explicitly disable stdout/stderr capturing otherwise it will get captured
and won't appear in the terminal.
"""
def __init__(self, terminal_reporter, capture_manager):
"""
:param _pytest.terminal.TerminalReporter terminal_reporter:
:param _pytest.capture.CaptureManager capture_manager:
"""
logging.StreamHandler.__init__(self, stream=terminal_reporter)
self.capture_manager = capture_manager
self.reset()
self.set_when(None)
self._test_outcome_written = False
def reset(self):
"""Reset the handler; should be called before the start of each test"""
self._first_record_emitted = False
def set_when(self, when):
"""Prepares for the given test phase (setup/call/teardown)"""
self._when = when
self._section_name_shown = False
if when == 'start':
self._test_outcome_written = False
def emit(self, record):
if self.capture_manager is not None:
self.capture_manager.suspend_global_capture()
try:
if not self._first_record_emitted:
self.stream.write('\n')
self._first_record_emitted = True
elif self._when in ('teardown', 'finish'):
if not self._test_outcome_written:
self._test_outcome_written = True
self.stream.write('\n')
if not self._section_name_shown and self._when:
self.stream.section('live log ' + self._when, sep='-', bold=True)
self._section_name_shown = True
logging.StreamHandler.emit(self, record)
finally:
if self.capture_manager is not None:
self.capture_manager.resume_global_capture()

View File

@ -1,22 +1,22 @@
""" core implementation of testing process: init, session, runtest loop. """ """ core implementation of testing process: init, session, runtest loop. """
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import contextlib
import functools import functools
import os import os
import pkgutil
import six
import sys import sys
import _pytest import _pytest
from _pytest import nodes
import _pytest._code import _pytest._code
import py import py
try:
from collections import MutableMapping as MappingMixin
except ImportError:
from UserDict import DictMixin as MappingMixin
from _pytest.config import directory_arg, UsageError, hookimpl from _pytest.config import directory_arg, UsageError, hookimpl
from _pytest.runner import collect_one_node, exit from _pytest.outcomes import exit
from _pytest.runner import collect_one_node
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
# exitcodes for the command line # exitcodes for the command line
EXIT_OK = 0 EXIT_OK = 0
@ -30,13 +30,14 @@ EXIT_NOTESTSCOLLECTED = 5
def pytest_addoption(parser): def pytest_addoption(parser):
parser.addini("norecursedirs", "directory patterns to avoid for recursion", parser.addini("norecursedirs", "directory patterns to avoid for recursion",
type="args", default=['.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv']) type="args", default=['.*', 'build', 'dist', 'CVS', '_darcs', '{arch}', '*.egg', 'venv'])
parser.addini("testpaths", "directories to search for tests when no files or directories are given in the command line.", parser.addini("testpaths", "directories to search for tests when no files or directories are given in the "
"command line.",
type="args", default=[]) type="args", default=[])
#parser.addini("dirpatterns", # parser.addini("dirpatterns",
# "patterns specifying possible locations of test files", # "patterns specifying possible locations of test files",
# type="linelist", default=["**/test_*.txt", # type="linelist", default=["**/test_*.txt",
# "**/test_*.py", "**/*_test.py"] # "**/test_*.py", "**/*_test.py"]
#) # )
group = parser.getgroup("general", "running and selection options") group = parser.getgroup("general", "running and selection options")
group._addoption('-x', '--exitfirst', action="store_const", group._addoption('-x', '--exitfirst', action="store_const",
dest="maxfail", const=1, dest="maxfail", const=1,
@ -45,12 +46,18 @@ def pytest_addoption(parser):
action="store", type=int, dest="maxfail", default=0, action="store", type=int, dest="maxfail", default=0,
help="exit after first num failures or errors.") help="exit after first num failures or errors.")
group._addoption('--strict', action="store_true", group._addoption('--strict', action="store_true",
help="run pytest in strict mode, warnings become errors.") help="marks not registered in configuration file raise errors.")
group._addoption("-c", metavar="file", type=str, dest="inifilename", group._addoption("-c", metavar="file", type=str, dest="inifilename",
help="load configuration from `file` instead of trying to locate one of the implicit configuration files.") help="load configuration from `file` instead of trying to locate one of the implicit "
"configuration files.")
group._addoption("--continue-on-collection-errors", action="store_true", group._addoption("--continue-on-collection-errors", action="store_true",
default=False, dest="continue_on_collection_errors", default=False, dest="continue_on_collection_errors",
help="Force test execution even if collection errors occur.") help="Force test execution even if collection errors occur.")
group._addoption("--rootdir", action="store",
dest="rootdir",
help="Define root directory for tests. Can be relative path: 'root_dir', './root_dir', "
"'root_dir/another_dir/'; absolute path: '/home/user/root_dir'; path with variables: "
"'$HOME/root_dir'.")
group = parser.getgroup("collect", "collection") group = parser.getgroup("collect", "collection")
group.addoption('--collectonly', '--collect-only', action="store_true", group.addoption('--collectonly', '--collect-only', action="store_true",
@ -59,6 +66,8 @@ def pytest_addoption(parser):
help="try to interpret all arguments as python packages.") help="try to interpret all arguments as python packages.")
group.addoption("--ignore", action="append", metavar="path", group.addoption("--ignore", action="append", metavar="path",
help="ignore path during collection (multi-allowed).") help="ignore path during collection (multi-allowed).")
group.addoption("--deselect", action="append", metavar="nodeid_prefix",
help="deselect item during collection (multi-allowed).")
# when changing this to --conf-cut-dir, config.py Conftest.setinitial # when changing this to --conf-cut-dir, config.py Conftest.setinitial
# needs upgrading as well # needs upgrading as well
group.addoption('--confcutdir', dest="confcutdir", default=None, group.addoption('--confcutdir', dest="confcutdir", default=None,
@ -70,6 +79,9 @@ def pytest_addoption(parser):
group.addoption('--keepduplicates', '--keep-duplicates', action="store_true", group.addoption('--keepduplicates', '--keep-duplicates', action="store_true",
dest="keepduplicates", default=False, dest="keepduplicates", default=False,
help="Keep duplicate tests.") help="Keep duplicate tests.")
group.addoption('--collect-in-virtualenv', action='store_true',
dest='collect_in_virtualenv', default=False,
help="Don't ignore tests in a local virtualenv directory")
group = parser.getgroup("debugconfig", group = parser.getgroup("debugconfig",
"test session debugging and configuration") "test session debugging and configuration")
@ -77,16 +89,6 @@ def pytest_addoption(parser):
help="base temporary directory for this test run.") help="base temporary directory for this test run.")
def pytest_namespace():
"""keeping this one works around a deeper startup issue in pytest
i tried to find it for a while but the amount of time turned unsustainable,
so i put a hack in to revisit later
"""
return {}
def pytest_configure(config): def pytest_configure(config):
__import__('pytest').config = config # compatibility __import__('pytest').config = config # compatibility
@ -105,6 +107,8 @@ def wrap_session(config, doit):
session.exitstatus = doit(config, session) or 0 session.exitstatus = doit(config, session) or 0
except UsageError: except UsageError:
raise raise
except Failed:
session.exitstatus = EXIT_TESTSFAILED
except KeyboardInterrupt: except KeyboardInterrupt:
excinfo = _pytest._code.ExceptionInfo() excinfo = _pytest._code.ExceptionInfo()
if initstate < 2 and isinstance(excinfo.value, exit.Exception): if initstate < 2 and isinstance(excinfo.value, exit.Exception):
@ -112,7 +116,7 @@ def wrap_session(config, doit):
excinfo.typename, excinfo.value.msg)) excinfo.typename, excinfo.value.msg))
config.hook.pytest_keyboard_interrupt(excinfo=excinfo) config.hook.pytest_keyboard_interrupt(excinfo=excinfo)
session.exitstatus = EXIT_INTERRUPTED session.exitstatus = EXIT_INTERRUPTED
except: except: # noqa
excinfo = _pytest._code.ExceptionInfo() excinfo = _pytest._code.ExceptionInfo()
config.notify_exception(excinfo, config.option) config.notify_exception(excinfo, config.option)
session.exitstatus = EXIT_INTERNALERROR session.exitstatus = EXIT_INTERNALERROR
@ -160,22 +164,38 @@ def pytest_runtestloop(session):
return True return True
for i, item in enumerate(session.items): for i, item in enumerate(session.items):
nextitem = session.items[i+1] if i+1 < len(session.items) else None nextitem = session.items[i + 1] if i + 1 < len(session.items) else None
item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem) item.config.hook.pytest_runtest_protocol(item=item, nextitem=nextitem)
if session.shouldfail:
raise session.Failed(session.shouldfail)
if session.shouldstop: if session.shouldstop:
raise session.Interrupted(session.shouldstop) raise session.Interrupted(session.shouldstop)
return True return True
def _in_venv(path):
"""Attempts to detect if ``path`` is the root of a Virtual Environment by
checking for the existence of the appropriate activate script"""
bindir = path.join('Scripts' if sys.platform.startswith('win') else 'bin')
if not bindir.isdir():
return False
activates = ('activate', 'activate.csh', 'activate.fish',
'Activate', 'Activate.bat', 'Activate.ps1')
return any([fname.basename in activates for fname in bindir.listdir()])
def pytest_ignore_collect(path, config): def pytest_ignore_collect(path, config):
p = path.dirpath() ignore_paths = config._getconftest_pathlist("collect_ignore", path=path.dirpath())
ignore_paths = config._getconftest_pathlist("collect_ignore", path=p)
ignore_paths = ignore_paths or [] ignore_paths = ignore_paths or []
excludeopt = config.getoption("ignore") excludeopt = config.getoption("ignore")
if excludeopt: if excludeopt:
ignore_paths.extend([py.path.local(x) for x in excludeopt]) ignore_paths.extend([py.path.local(x) for x in excludeopt])
if path in ignore_paths: if py.path.local(path) in ignore_paths:
return True
allow_in_venv = config.getoption("collect_in_virtualenv")
if _in_venv(path) and not allow_in_venv:
return True return True
# Skip duplicate paths. # Skip duplicate paths.
@ -190,7 +210,65 @@ def pytest_ignore_collect(path, config):
return False return False
class FSHookProxy: def pytest_collection_modifyitems(items, config):
deselect_prefixes = tuple(config.getoption("deselect") or [])
if not deselect_prefixes:
return
remaining = []
deselected = []
for colitem in items:
if colitem.nodeid.startswith(deselect_prefixes):
deselected.append(colitem)
else:
remaining.append(colitem)
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
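For example, items can be deselected by node id prefix on the command line (paths are illustrative)::

    pytest --deselect tests/test_flaky.py::test_unstable --deselect tests/test_slow.py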
@contextlib.contextmanager
def _patched_find_module():
"""Patch bug in pkgutil.ImpImporter.find_module
When using pkgutil.find_loader on python<3.4 it removes symlinks
from the path due to a call to os.path.realpath. This is not consistent
with actually doing the import (in these versions, pkgutil and __import__
did not share the same underlying code). This can break conftest
discovery for pytest where symlinks are involved.
The only python<3.4 supported by pytest is python 2.7.
"""
if six.PY2: # python 3.4+ uses importlib instead
def find_module_patched(self, fullname, path=None):
# Note: we ignore 'path' argument since it is only used via meta_path
subname = fullname.split(".")[-1]
if subname != fullname and self.path is None:
return None
if self.path is None:
path = None
else:
# original: path = [os.path.realpath(self.path)]
path = [self.path]
try:
file, filename, etc = pkgutil.imp.find_module(subname,
path)
except ImportError:
return None
return pkgutil.ImpLoader(fullname, file, filename, etc)
old_find_module = pkgutil.ImpImporter.find_module
pkgutil.ImpImporter.find_module = find_module_patched
try:
yield
finally:
pkgutil.ImpImporter.find_module = old_find_module
else:
yield
class FSHookProxy(object):
def __init__(self, fspath, pm, remove_mods): def __init__(self, fspath, pm, remove_mods):
self.fspath = fspath self.fspath = fspath
self.pm = pm self.pm = pm
@ -201,373 +279,42 @@ class FSHookProxy:
self.__dict__[name] = x self.__dict__[name] = x
return x return x
class _CompatProperty(object):
def __init__(self, name):
self.name = name
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(__import__('pytest'), self.name)
class NodeKeywords(MappingMixin):
def __init__(self, node):
self.node = node
self.parent = node.parent
self._markers = {node.name: True}
def __getitem__(self, key):
try:
return self._markers[key]
except KeyError:
if self.parent is None:
raise
return self.parent.keywords[key]
def __setitem__(self, key, value):
self._markers[key] = value
def __delitem__(self, key):
raise ValueError("cannot delete key in keywords dict")
def __iter__(self):
seen = set(self._markers)
if self.parent is not None:
seen.update(self.parent.keywords)
return iter(seen)
def __len__(self):
return len(self.__iter__())
def keys(self):
return list(self)
def __repr__(self):
return "<NodeKeywords for node %s>" % (self.node, )
class Node(object):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
def __init__(self, name, parent=None, config=None, session=None):
#: a unique name within the scope of the parent node
self.name = name
#: the parent collector node.
self.parent = parent
#: the pytest config object
self.config = config or parent.config
#: the session this node is part of
self.session = session or parent.session
#: filesystem path where this node was collected from (can be None)
self.fspath = getattr(parent, 'fspath', None)
#: keywords/markers collected from all scopes
self.keywords = NodeKeywords(self)
#: allow adding of extra keywords to use for matching
self.extra_keyword_matches = set()
# used for storing artificial fixturedefs for direct parametrization
self._name2pseudofixturedef = {}
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(__import__('pytest'), name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
return "<%s %r>" %(self.__class__.__name__,
getattr(self, 'name', None))
def warn(self, code, message):
""" generate a warning with the given code and message for this
item. """
assert isinstance(code, str)
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
# methods for ordering nodes
@property
def nodeid(self):
""" a ::-separated string denoting its collection tree address. """
try:
return self._nodeid
except AttributeError:
self._nodeid = x = self._makeid()
return x
def _makeid(self):
return self.parent.nodeid + "::" + self.name
def __hash__(self):
return hash(self.nodeid)
def setup(self):
pass
def teardown(self):
pass
def _memoizedcall(self, attrname, function):
exattrname = "_ex_" + attrname
failure = getattr(self, exattrname, None)
if failure is not None:
py.builtin._reraise(failure[0], failure[1], failure[2])
if hasattr(self, attrname):
return getattr(self, attrname)
try:
res = function()
except py.builtin._sysex:
raise
except:
failure = sys.exc_info()
setattr(self, exattrname, failure)
raise
setattr(self, attrname, res)
return res
def listchain(self):
""" return list of all parent collectors up to self,
starting from root of collection tree. """
chain = []
item = self
while item is not None:
chain.append(item)
item = item.parent
chain.reverse()
return chain
def add_marker(self, marker):
""" dynamically add a marker object to the node.
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator, MARK_GEN
if isinstance(marker, py.builtin._basestring):
marker = getattr(MARK_GEN, marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
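For example, a ``conftest.py`` hook could add a marker dynamically (marker name is illustrative)::

    import pytest

    def pytest_collection_modifyitems(items):
        for item in items:
            if "network" in item.nodeid:
                item.add_marker(pytest.mark.network)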
def get_marker(self, name):
""" get a marker object from this node or None if
the node doesn't have a marker with that name. """
val = self.keywords.get(name, None)
if val is not None:
from _pytest.mark import MarkInfo, MarkDecorator
if isinstance(val, (MarkDecorator, MarkInfo)):
return val
def listextrakeywords(self):
""" Return a set of all extra keywords in self and any parents."""
extra_keywords = set()
item = self
for item in self.listchain():
extra_keywords.update(item.extra_keyword_matches)
return extra_keywords
def listnames(self):
return [x.name for x in self.listchain()]
def addfinalizer(self, fin):
""" register a function to be called when this node is finalized.
This method can only be called when this node is active
in a setup chain, for example during self.setup().
"""
self.session._setupstate.addfinalizer(fin, self)
def getparent(self, cls):
""" get the next parent node (including ourself)
which is an instance of the given class"""
current = self
while current and not isinstance(current, cls):
current = current.parent
return current
def _prunetraceback(self, excinfo):
pass
def _repr_failure_py(self, excinfo, style=None):
fm = self.session._fixturemanager
if excinfo.errisinstance(fm.FixtureLookupError):
return excinfo.value.formatrepr()
tbfilter = True
if self.config.option.fulltrace:
style="long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
# XXX should excinfo.getrepr record all data and toterminal() process it?
if style is None:
if self.config.option.tbstyle == "short":
style = "short"
else:
style = "long"
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
repr_failure = _repr_failure_py
class Collector(Node):
""" Collector instances create children through collect()
and thus iteratively build a tree.
"""
class CollectError(Exception):
""" an error during collection, contains a custom message. """
def collect(self):
""" returns a list of children (items and collectors)
for this collection node.
"""
raise NotImplementedError("abstract")
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
exc = excinfo.value
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
ntraceback = traceback.cut(path=self.fspath)
if ntraceback == traceback:
ntraceback = ntraceback.cut(excludepath=tracebackcutdir)
excinfo.traceback = ntraceback.filter()
class FSCollector(Collector):
def __init__(self, fspath, parent=None, config=None, session=None):
fspath = py.path.local(fspath) # xxx only for test_resultlog.py?
name = fspath.basename
if parent is not None:
rel = fspath.relto(parent.fspath)
if rel:
name = rel
name = name.replace(os.sep, "/")
super(FSCollector, self).__init__(name, parent, config, session)
self.fspath = fspath
def _makeid(self):
relpath = self.fspath.relto(self.config.rootdir)
if os.sep != "/":
relpath = relpath.replace(os.sep, "/")
return relpath
class File(FSCollector):
""" base class for collecting tests from a file. """
class Item(Node):
""" a basic test invocation item. Note that for a single function
there might be multiple test invocation items.
"""
nextitem = None
def __init__(self, name, parent=None, config=None, session=None):
super(Item, self).__init__(name, parent, config, session)
self._report_sections = []
def add_report_section(self, when, key, content):
if content:
self._report_sections.append((when, key, content))
def reportinfo(self):
return self.fspath, None, ""
@property
def location(self):
try:
return self._location
except AttributeError:
location = self.reportinfo()
# bestrelpath is a quite slow function
cache = self.config.__dict__.setdefault("_bestrelpathcache", {})
try:
fspath = cache[location[0]]
except KeyError:
fspath = self.session.fspath.bestrelpath(location[0])
cache[location[0]] = fspath
location = (fspath, location[1], str(location[2]))
self._location = location
return location
class NoMatch(Exception): class NoMatch(Exception):
""" raised if matching cannot locate a matching names. """ """ raised if matching cannot locate a matching names. """
class Interrupted(KeyboardInterrupt): class Interrupted(KeyboardInterrupt):
""" signals an interrupted test run. """ """ signals an interrupted test run. """
__module__ = 'builtins' # for py3 __module__ = 'builtins' # for py3
class Session(FSCollector):
class Failed(Exception):
""" signals an stop as failed test run. """
class Session(nodes.FSCollector):
Interrupted = Interrupted Interrupted = Interrupted
Failed = Failed
def __init__(self, config): def __init__(self, config):
FSCollector.__init__(self, config.rootdir, parent=None, nodes.FSCollector.__init__(
config=config, session=self) self, config.rootdir, parent=None,
config=config, session=self, nodeid="")
self.testsfailed = 0 self.testsfailed = 0
self.testscollected = 0 self.testscollected = 0
self.shouldstop = False self.shouldstop = False
self.shouldfail = False
self.trace = config.trace.root.get("collection") self.trace = config.trace.root.get("collection")
self._norecursepatterns = config.getini("norecursedirs") self._norecursepatterns = config.getini("norecursedirs")
self.startdir = py.path.local() self.startdir = py.path.local()
self.config.pluginmanager.register(self, name="session")
def _makeid(self): self.config.pluginmanager.register(self, name="session")
return ""
@hookimpl(tryfirst=True) @hookimpl(tryfirst=True)
def pytest_collectstart(self): def pytest_collectstart(self):
if self.shouldfail:
raise self.Failed(self.shouldfail)
if self.shouldstop: if self.shouldstop:
raise self.Interrupted(self.shouldstop) raise self.Interrupted(self.shouldstop)
@ -577,7 +324,7 @@ class Session(FSCollector):
self.testsfailed += 1 self.testsfailed += 1
maxfail = self.config.getvalue("maxfail") maxfail = self.config.getvalue("maxfail")
if maxfail and self.testsfailed >= maxfail: if maxfail and self.testsfailed >= maxfail:
self.shouldstop = "stopping after %d failures" % ( self.shouldfail = "stopping after %d failures" % (
self.testsfailed) self.testsfailed)
pytest_collectreport = pytest_runtest_logreport pytest_collectreport = pytest_runtest_logreport
@ -692,8 +439,9 @@ class Session(FSCollector):
"""Convert a dotted module name to path. """Convert a dotted module name to path.
""" """
import pkgutil
try: try:
with _patched_find_module():
loader = pkgutil.find_loader(x) loader = pkgutil.find_loader(x)
except ImportError: except ImportError:
return x return x
@ -702,6 +450,7 @@ class Session(FSCollector):
# This method is sometimes invoked when AssertionRewritingHook, which # This method is sometimes invoked when AssertionRewritingHook, which
# does not define a get_filename method, is already in place: # does not define a get_filename method, is already in place:
try: try:
with _patched_find_module():
path = loader.get_filename(x) path = loader.get_filename(x)
except AttributeError: except AttributeError:
# Retrieve path from AssertionRewritingHook: # Retrieve path from AssertionRewritingHook:
@ -746,11 +495,11 @@ class Session(FSCollector):
nextnames = names[1:] nextnames = names[1:]
resultnodes = [] resultnodes = []
for node in matching: for node in matching:
if isinstance(node, Item): if isinstance(node, nodes.Item):
if not names: if not names:
resultnodes.append(node) resultnodes.append(node)
continue continue
assert isinstance(node, Collector) assert isinstance(node, nodes.Collector)
rep = collect_one_node(node) rep = collect_one_node(node)
if rep.passed: if rep.passed:
has_matched = False has_matched = False
@ -772,11 +521,11 @@ class Session(FSCollector):
def genitems(self, node): def genitems(self, node):
self.trace("genitems", node) self.trace("genitems", node)
if isinstance(node, Item): if isinstance(node, nodes.Item):
node.ihook.pytest_itemcollected(item=node) node.ihook.pytest_itemcollected(item=node)
yield node yield node
else: else:
assert isinstance(node, Collector) assert isinstance(node, nodes.Collector)
rep = collect_one_node(node) rep = collect_one_node(node)
if rep.passed: if rep.passed:
for subnode in rep.result: for subnode in rep.result:

157
_pytest/mark/__init__.py Normal file
View File

@ -0,0 +1,157 @@
""" generic mechanism for marking and selecting python functions. """
from __future__ import absolute_import, division, print_function
from _pytest.config import UsageError
from .structures import (
ParameterSet, EMPTY_PARAMETERSET_OPTION, MARK_GEN,
Mark, MarkInfo, MarkDecorator, MarkGenerator,
transfer_markers, get_empty_parameterset_mark
)
from .legacy import matchkeyword, matchmark
__all__ = [
'Mark', 'MarkInfo', 'MarkDecorator', 'MarkGenerator',
'transfer_markers', 'get_empty_parameterset_mark'
]
class MarkerError(Exception):
"""Error in use of a pytest marker/attribute."""
def param(*values, **kw):
"""Specify a parameter in `pytest.mark.parametrize`_ calls or
:ref:`parametrized fixtures <fixture-parametrize-marks>`.
.. code-block:: python
@pytest.mark.parametrize("test_input,expected", [
("3+5", 8),
pytest.param("6*9", 42, marks=pytest.mark.xfail),
])
def test_eval(test_input, expected):
assert eval(test_input) == expected
:param values: variable args of the values of the parameter set, in order.
:keyword marks: a single mark or a list of marks to be applied to this parameter set.
:keyword str id: the id to attribute to this parameter set.
"""
return ParameterSet.param(*values, **kw)
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption(
'-k',
action="store", dest="keyword", default='', metavar="EXPRESSION",
help="only run tests which match the given substring expression. "
"An expression is a python evaluatable expression "
"where all names are substring-matched against test names "
"and their parent classes. Example: -k 'test_method or test_"
"other' matches all test functions and classes whose name "
"contains 'test_method' or 'test_other', while -k 'not test_method' "
"matches those that don't contain 'test_method' in their names. "
"Additionally keywords are matched to classes and functions "
"containing extra names in their 'extra_keyword_matches' set, "
"as well as functions which have names assigned directly to them."
)
group._addoption(
"-m",
action="store", dest="markexpr", default="", metavar="MARKEXPR",
help="only run tests matching given mark expression. "
"example: -m 'mark1 and not mark2'."
)
group.addoption(
"--markers", action="store_true",
help="show markers (builtin, plugin and per-project ones)."
)
parser.addini("markers", "markers for test functions", 'linelist')
parser.addini(
EMPTY_PARAMETERSET_OPTION,
"default marker for empty parametersets")
def pytest_cmdline_main(config):
import _pytest.config
if config.option.markers:
config._do_configure()
tw = _pytest.config.create_terminal_writer(config)
for line in config.getini("markers"):
parts = line.split(":", 1)
name = parts[0]
rest = parts[1] if len(parts) == 2 else ''
tw.write("@pytest.mark.%s:" % name, bold=True)
tw.line(rest)
tw.line()
config._ensure_unconfigure()
return 0
pytest_cmdline_main.tryfirst = True
def deselect_by_keyword(items, config):
keywordexpr = config.option.keyword.lstrip()
if keywordexpr.startswith("-"):
keywordexpr = "not " + keywordexpr[1:]
selectuntil = False
if keywordexpr[-1:] == ":":
selectuntil = True
keywordexpr = keywordexpr[:-1]
remaining = []
deselected = []
for colitem in items:
if keywordexpr and not matchkeyword(colitem, keywordexpr):
deselected.append(colitem)
else:
if selectuntil:
keywordexpr = None
remaining.append(colitem)
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
def deselect_by_mark(items, config):
matchexpr = config.option.markexpr
if not matchexpr:
return
remaining = []
deselected = []
for item in items:
if matchmark(item, matchexpr):
remaining.append(item)
else:
deselected.append(item)
if deselected:
config.hook.pytest_deselected(items=deselected)
items[:] = remaining
def pytest_collection_modifyitems(items, config):
deselect_by_keyword(items, config)
deselect_by_mark(items, config)
def pytest_configure(config):
config._old_mark_config = MARK_GEN._config
if config.option.strict:
MARK_GEN._config = config
empty_parameterset = config.getini(EMPTY_PARAMETERSET_OPTION)
if empty_parameterset not in ('skip', 'xfail', None, ''):
raise UsageError(
"{!s} must be one of skip and xfail,"
" but it is {!r}".format(EMPTY_PARAMETERSET_OPTION, empty_parameterset))
def pytest_unconfigure(config):
MARK_GEN._config = getattr(config, '_old_mark_config', None)
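For illustration, a minimal sketch of how the -k/-m plumbing above is exercised from a test module (the module and test names here are invented, and running pytest via pytest.main inside the module is only a convenience for the sketch):

    # test_selection_demo.py -- hypothetical module, for illustration only
    import pytest

    @pytest.mark.slow
    def test_db_roundtrip():
        assert True

    def test_fast_path():
        assert True

    if __name__ == "__main__":
        # "-m 'not slow'" is applied by deselect_by_mark(); "-k fast" by
        # deselect_by_keyword(); both run from pytest_collection_modifyitems().
        pytest.main(["-m", "not slow", __file__])
        pytest.main(["-k", "fast", __file__])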

_pytest/mark/evaluate.py Normal file

@ -0,0 +1,118 @@
import os
import six
import sys
import platform
import traceback
from ..outcomes import fail, TEST_OUTCOME
def cached_eval(config, expr, d):
if not hasattr(config, '_evalcache'):
config._evalcache = {}
try:
return config._evalcache[expr]
except KeyError:
import _pytest._code
exprcode = _pytest._code.compile(expr, mode="eval")
config._evalcache[expr] = x = eval(exprcode, d)
return x
class MarkEvaluator(object):
def __init__(self, item, name):
self.item = item
self._marks = None
self._mark = None
self._mark_name = name
def __bool__(self):
# don't cache here to prevent staleness
return bool(self._get_marks())
__nonzero__ = __bool__
def wasvalid(self):
return not hasattr(self, 'exc')
def _get_marks(self):
return [x for x in self.item.iter_markers() if x.name == self._mark_name]
def invalidraise(self, exc):
raises = self.get('raises')
if not raises:
return
return not isinstance(exc, raises)
def istrue(self):
try:
return self._istrue()
except TEST_OUTCOME:
self.exc = sys.exc_info()
if isinstance(self.exc[1], SyntaxError):
msg = [" " * (self.exc[1].offset + 4) + "^", ]
msg.append("SyntaxError: invalid syntax")
else:
msg = traceback.format_exception_only(*self.exc[:2])
fail("Error evaluating %r expression\n"
" %s\n"
"%s"
% (self._mark_name, self.expr, "\n".join(msg)),
pytrace=False)
def _getglobals(self):
d = {'os': os, 'sys': sys, 'platform': platform, 'config': self.item.config}
if hasattr(self.item, 'obj'):
d.update(self.item.obj.__globals__)
return d
def _istrue(self):
if hasattr(self, 'result'):
return self.result
self._marks = self._get_marks()
if self._marks:
self.result = False
for mark in self._marks:
self._mark = mark
if 'condition' in mark.kwargs:
args = (mark.kwargs['condition'],)
else:
args = mark.args
for expr in args:
self.expr = expr
if isinstance(expr, six.string_types):
d = self._getglobals()
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in mark.kwargs:
# XXX better be checked at collection time
msg = "you need to specify reason=STRING " \
"when using booleans as conditions."
fail(msg)
result = bool(expr)
if result:
self.result = True
self.reason = mark.kwargs.get('reason', None)
self.expr = expr
return self.result
if not args:
self.result = True
self.reason = mark.kwargs.get('reason', None)
return self.result
return False
def get(self, attr, default=None):
if self._mark is None:
return default
return self._mark.kwargs.get(attr, default)
def getexplanation(self):
expl = getattr(self, 'reason', None) or self.get('reason', None)
if not expl:
if not hasattr(self, 'expr'):
return ""
else:
return "condition: " + str(self.expr)
return expl
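A short sketch of the conditions MarkEvaluator handles for skipif: string conditions are evaluated with os, sys, platform and config in scope (see _getglobals above), while boolean conditions must come with an explicit reason=. Test names are illustrative only:

    import sys
    import pytest

    @pytest.mark.skipif("sys.version_info < (3, 6)", reason="needs Python 3.6+")
    def test_new_syntax():
        assert True

    @pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only check")
    def test_posix_only():
        # omitting reason= on a boolean condition makes MarkEvaluator fail with
        # "you need to specify reason=STRING when using booleans as conditions."
        assert True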

_pytest/mark/legacy.py Normal file

@ -0,0 +1,97 @@
"""
this is a place where we put datastructures used by legacy apis
we hope to remove
"""
import attr
import keyword
from . import MarkInfo, MarkDecorator
from _pytest.config import UsageError
@attr.s
class MarkMapping(object):
"""Provides a local mapping for markers where item access
resolves to True if the marker is present. """
own_mark_names = attr.ib()
@classmethod
def from_keywords(cls, keywords):
mark_names = set()
for key, value in keywords.items():
if isinstance(value, MarkInfo) or isinstance(value, MarkDecorator):
mark_names.add(key)
return cls(mark_names)
def __getitem__(self, name):
return name in self.own_mark_names
class KeywordMapping(object):
"""Provides a local mapping for keywords.
Given a list of names, map any substring of one of these names to True.
"""
def __init__(self, names):
self._names = names
@classmethod
def from_item(cls, item):
mapped_names = set()
# Add the names of the current item and any parent items
import pytest
for item in item.listchain():
if not isinstance(item, pytest.Instance):
mapped_names.add(item.name)
# Add the names added as extra keywords to current or parent items
for name in item.listextrakeywords():
mapped_names.add(name)
# Add the names attached to the current function through direct assignment
if hasattr(item, 'function'):
for name in item.function.__dict__:
mapped_names.add(name)
return cls(mapped_names)
def __getitem__(self, subname):
for name in self._names:
if subname in name:
return True
return False
python_keywords_allowed_list = ["or", "and", "not"]
def matchmark(colitem, markexpr):
"""Tries to match on any marker names, attached to the given colitem."""
return eval(markexpr, {}, MarkMapping.from_keywords(colitem.keywords))
def matchkeyword(colitem, keywordexpr):
"""Tries to match given keyword expression to given collector item.
Will match on the name of colitem, including the names of its parents.
Only matches names of items which are either a :class:`Class` or a
:class:`Function`.
Additionally, matches on names in the 'extra_keyword_matches' set of
any item, as well as names directly assigned to test functions.
"""
mapping = KeywordMapping.from_item(colitem)
if " " not in keywordexpr:
# special case to allow for simple "-k pass" and "-k 1.3"
return mapping[keywordexpr]
elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
return not mapping[keywordexpr[4:]]
for kwd in keywordexpr.split():
if keyword.iskeyword(kwd) and kwd not in python_keywords_allowed_list:
raise UsageError("Python keyword '{}' not accepted in expressions passed to '-k'".format(kwd))
try:
return eval(keywordexpr, {}, mapping)
except SyntaxError:
raise UsageError("Wrong expression passed to '-k': {}".format(keywordexpr))

_pytest/mark/structures.py

@ -1,12 +1,17 @@
""" generic mechanism for marking and selecting python functions. """
from __future__ import absolute_import, division, print_function
import inspect import inspect
import warnings import warnings
from collections import namedtuple from collections import namedtuple
from operator import attrgetter from operator import attrgetter
from .compat import imap
from .deprecated import MARK_INFO_ATTRIBUTE, MARK_PARAMETERSET_UNPACKING import attr
from ..deprecated import MARK_PARAMETERSET_UNPACKING, MARK_INFO_ATTRIBUTE
from ..compat import NOTSET, getfslineno, MappingMixin
from six.moves import map, reduce
EMPTY_PARAMETERSET_OPTION = "empty_parameter_set_mark"
def alias(name, warning=None): def alias(name, warning=None):
getter = attrgetter(name) getter = attrgetter(name)
@ -18,6 +23,25 @@ def alias(name, warning=None):
return property(getter if warning is None else warned, doc='alias for ' + name) return property(getter if warning is None else warned, doc='alias for ' + name)
def istestfunc(func):
return hasattr(func, "__call__") and \
getattr(func, "__name__", "<lambda>") != "<lambda>"
def get_empty_parameterset_mark(config, argnames, func):
requested_mark = config.getini(EMPTY_PARAMETERSET_OPTION)
if requested_mark in ('', None, 'skip'):
mark = MARK_GEN.skip
elif requested_mark == 'xfail':
mark = MARK_GEN.xfail(run=False)
else:
raise LookupError(requested_mark)
fs, lineno = getfslineno(func)
reason = "got empty parameter set %r, function %s at %s:%d" % (
argnames, func.__name__, fs, lineno)
return mark(reason=reason)
class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')): class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
@classmethod @classmethod
def param(cls, *values, **kw): def param(cls, *values, **kw):
@ -30,8 +54,8 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
def param_extract_id(id=None): def param_extract_id(id=None):
return id return id
id = param_extract_id(**kw) id_ = param_extract_id(**kw)
return cls(values, marks, id) return cls(values, marks, id_)
@classmethod @classmethod
def extract_from(cls, parameterset, legacy_force_tuple=False): def extract_from(cls, parameterset, legacy_force_tuple=False):
@ -66,221 +90,53 @@ class ParameterSet(namedtuple('ParameterSet', 'values, marks, id')):
return cls(argval, marks=newmarks, id=None) return cls(argval, marks=newmarks, id=None)
@property @classmethod
def deprecated_arg_dict(self): def _for_parametrize(cls, argnames, argvalues, func, config):
return dict((mark.name, mark) for mark in self.marks) if not isinstance(argnames, (tuple, list)):
argnames = [x.strip() for x in argnames.split(",") if x.strip()]
force_tuple = len(argnames) == 1
class MarkerError(Exception):
"""Error in use of a pytest marker/attribute."""
def param(*values, **kw):
return ParameterSet.param(*values, **kw)
def pytest_addoption(parser):
group = parser.getgroup("general")
group._addoption(
'-k',
action="store", dest="keyword", default='', metavar="EXPRESSION",
help="only run tests which match the given substring expression. "
"An expression is a python evaluatable expression "
"where all names are substring-matched against test names "
"and their parent classes. Example: -k 'test_method or test_"
"other' matches all test functions and classes whose name "
"contains 'test_method' or 'test_other'. "
"Additionally keywords are matched to classes and functions "
"containing extra names in their 'extra_keyword_matches' set, "
"as well as functions which have names assigned directly to them."
)
group._addoption(
"-m",
action="store", dest="markexpr", default="", metavar="MARKEXPR",
help="only run tests matching given mark expression. "
"example: -m 'mark1 and not mark2'."
)
group.addoption(
"--markers", action="store_true",
help="show markers (builtin, plugin and per-project ones)."
)
parser.addini("markers", "markers for test functions", 'linelist')
def pytest_cmdline_main(config):
import _pytest.config
if config.option.markers:
config._do_configure()
tw = _pytest.config.create_terminal_writer(config)
for line in config.getini("markers"):
name, rest = line.split(":", 1)
tw.write("@pytest.mark.%s:" % name, bold=True)
tw.line(rest)
tw.line()
config._ensure_unconfigure()
return 0
pytest_cmdline_main.tryfirst = True
def pytest_collection_modifyitems(items, config):
keywordexpr = config.option.keyword.lstrip()
matchexpr = config.option.markexpr
if not keywordexpr and not matchexpr:
return
# pytest used to allow "-" for negating
# but today we just allow "-" at the beginning, use "not" instead
# we probably remove "-" altogether soon
if keywordexpr.startswith("-"):
keywordexpr = "not " + keywordexpr[1:]
selectuntil = False
if keywordexpr[-1:] == ":":
selectuntil = True
keywordexpr = keywordexpr[:-1]
remaining = []
deselected = []
for colitem in items:
if keywordexpr and not matchkeyword(colitem, keywordexpr):
deselected.append(colitem)
else: else:
if selectuntil: force_tuple = False
keywordexpr = None parameters = [
if matchexpr: ParameterSet.extract_from(x, legacy_force_tuple=force_tuple)
if not matchmark(colitem, matchexpr): for x in argvalues]
deselected.append(colitem) del argvalues
continue
remaining.append(colitem)
if deselected: if not parameters:
config.hook.pytest_deselected(items=deselected) mark = get_empty_parameterset_mark(config, argnames, func)
items[:] = remaining parameters.append(ParameterSet(
values=(NOTSET,) * len(argnames),
marks=[mark],
id=None,
))
return argnames, parameters
class MarkMapping: @attr.s(frozen=True)
"""Provides a local mapping for markers where item access class Mark(object):
resolves to True if the marker is present. """ #: name of the mark
def __init__(self, keywords): name = attr.ib(type=str)
mymarks = set() #: positional arguments of the mark decorator
for key, value in keywords.items(): args = attr.ib(type="List[object]")
if isinstance(value, MarkInfo) or isinstance(value, MarkDecorator): #: keyword arguments of the mark decorator
mymarks.add(key) kwargs = attr.ib(type="Dict[str, object]")
self._mymarks = mymarks
def __getitem__(self, name): def combined_with(self, other):
return name in self._mymarks
class KeywordMapping:
"""Provides a local mapping for keywords.
Given a list of names, map any substring of one of these names to True.
""" """
def __init__(self, names): :param other: the mark to combine with
self._names = names :type other: Mark
:rtype: Mark
def __getitem__(self, subname): combines by appending args and merging the mappings
for name in self._names:
if subname in name:
return True
return False
def matchmark(colitem, markexpr):
"""Tries to match on any marker names, attached to the given colitem."""
return eval(markexpr, {}, MarkMapping(colitem.keywords))
def matchkeyword(colitem, keywordexpr):
"""Tries to match given keyword expression to given collector item.
Will match on the name of colitem, including the names of its parents.
Only matches names of items which are either a :class:`Class` or a
:class:`Function`.
Additionally, matches on names in the 'extra_keyword_matches' set of
any item, as well as names directly assigned to test functions.
""" """
mapped_names = set() assert self.name == other.name
return Mark(
# Add the names of the current item and any parent items self.name, self.args + other.args,
import pytest dict(self.kwargs, **other.kwargs))
for item in colitem.listchain():
if not isinstance(item, pytest.Instance):
mapped_names.add(item.name)
# Add the names added as extra keywords to current or parent items
for name in colitem.listextrakeywords():
mapped_names.add(name)
# Add the names attached to the current function through direct assignment
if hasattr(colitem, 'function'):
for name in colitem.function.__dict__:
mapped_names.add(name)
mapping = KeywordMapping(mapped_names)
if " " not in keywordexpr:
# special case to allow for simple "-k pass" and "-k 1.3"
return mapping[keywordexpr]
elif keywordexpr.startswith("not ") and " " not in keywordexpr[4:]:
return not mapping[keywordexpr[4:]]
return eval(keywordexpr, {}, mapping)
def pytest_configure(config): @attr.s
config._old_mark_config = MARK_GEN._config class MarkDecorator(object):
if config.option.strict:
MARK_GEN._config = config
def pytest_unconfigure(config):
MARK_GEN._config = getattr(config, '_old_mark_config', None)
class MarkGenerator:
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``pytest.mark`` singleton instance. Example::
import pytest
@pytest.mark.slowtest
def test_function():
pass
will set a 'slowtest' :class:`MarkInfo` object
on the ``test_function`` object. """
_config = None
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError("Marker name must NOT start with underscore")
if self._config is not None:
self._check(name)
return MarkDecorator(Mark(name, (), {}))
def _check(self, name):
try:
if name in self._markers:
return
except AttributeError:
pass
self._markers = l = set()
for line in self._config.getini("markers"):
beginning = line.split(":", 1)
x = beginning[0].split("(", 1)[0]
l.add(x)
if name not in self._markers:
raise AttributeError("%r not a registered marker" % (name,))
def istestfunc(func):
return hasattr(func, "__call__") and \
getattr(func, "__name__", "<lambda>") != "<lambda>"
class MarkDecorator:
""" A decorator for test functions and test classes. When applied """ A decorator for test functions and test classes. When applied
it will create :class:`MarkInfo` objects which may be it will create :class:`MarkInfo` objects which may be
:ref:`retrieved by hooks as item keywords <excontrolskip>`. :ref:`retrieved by hooks as item keywords <excontrolskip>`.
@ -313,9 +169,8 @@ class MarkDecorator:
additional keyword or positional arguments. additional keyword or positional arguments.
""" """
def __init__(self, mark):
assert isinstance(mark, Mark), repr(mark) mark = attr.ib(validator=attr.validators.instance_of(Mark))
self.mark = mark
name = alias('mark.name') name = alias('mark.name')
args = alias('mark.args') args = alias('mark.args')
@ -326,11 +181,22 @@ class MarkDecorator:
return self.name # for backward-compat (2.4.1 had this attr) return self.name # for backward-compat (2.4.1 had this attr)
def __eq__(self, other): def __eq__(self, other):
return self.mark == other.mark return self.mark == other.mark if isinstance(other, MarkDecorator) else False
def __repr__(self): def __repr__(self):
return "<MarkDecorator %r>" % (self.mark,) return "<MarkDecorator %r>" % (self.mark,)
def with_args(self, *args, **kwargs):
""" return a MarkDecorator with extra arguments added
unlike call this can be used even if the sole argument is a callable/class
:return: MarkDecorator
"""
mark = Mark(self.name, args, kwargs)
return self.__class__(self.mark.combined_with(mark))
def __call__(self, *args, **kwargs): def __call__(self, *args, **kwargs):
""" if passed a single callable argument: decorate it with mark info. """ if passed a single callable argument: decorate it with mark info.
otherwise add *args/**kwargs in-place to mark information. """ otherwise add *args/**kwargs in-place to mark information. """
@ -344,9 +210,8 @@ class MarkDecorator:
store_legacy_markinfo(func, self.mark) store_legacy_markinfo(func, self.mark)
store_mark(func, self.mark) store_mark(func, self.mark)
return func return func
return self.with_args(*args, **kwargs)
mark = Mark(self.name, args, kwargs)
return self.__class__(self.mark.combined_with(mark))
def get_unpacked_marks(obj): def get_unpacked_marks(obj):
""" """
@ -368,7 +233,7 @@ def store_mark(obj, mark):
""" """
assert isinstance(mark, Mark), mark assert isinstance(mark, Mark), mark
# always reassign name to avoid updating pytestmark # always reassign name to avoid updating pytestmark
# in a referene that was only borrowed # in a reference that was only borrowed
obj.pytestmark = get_unpacked_marks(obj) + [mark] obj.pytestmark = get_unpacked_marks(obj) + [mark]
@ -379,60 +244,12 @@ def store_legacy_markinfo(func, mark):
raise TypeError("got {mark!r} instead of a Mark".format(mark=mark)) raise TypeError("got {mark!r} instead of a Mark".format(mark=mark))
holder = getattr(func, mark.name, None) holder = getattr(func, mark.name, None)
if holder is None: if holder is None:
holder = MarkInfo(mark) holder = MarkInfo.for_mark(mark)
setattr(func, mark.name, holder) setattr(func, mark.name, holder)
else: else:
holder.add_mark(mark) holder.add_mark(mark)
class Mark(namedtuple('Mark', 'name, args, kwargs')):
def combined_with(self, other):
assert self.name == other.name
return Mark(
self.name, self.args + other.args,
dict(self.kwargs, **other.kwargs))
class MarkInfo(object):
""" Marking object created by :class:`MarkDecorator` instances. """
def __init__(self, mark):
assert isinstance(mark, Mark), repr(mark)
self.combined = mark
self._marks = [mark]
name = alias('combined.name', warning=MARK_INFO_ATTRIBUTE)
args = alias('combined.args', warning=MARK_INFO_ATTRIBUTE)
kwargs = alias('combined.kwargs', warning=MARK_INFO_ATTRIBUTE)
def __repr__(self):
return "<MarkInfo {0!r}>".format(self.combined)
def add_mark(self, mark):
""" add a MarkInfo with the given args and kwargs. """
self._marks.append(mark)
self.combined = self.combined.combined_with(mark)
def __iter__(self):
""" yield MarkInfo objects each relating to a marking-call. """
return imap(MarkInfo, self._marks)
MARK_GEN = MarkGenerator()
def _marked(func, mark):
""" Returns True if :func: is already marked with :mark:, False otherwise.
This can happen if marker is applied to class and the test file is
invoked more than once.
"""
try:
func_mark = getattr(func, mark.name)
except AttributeError:
return False
return mark.args == func_mark.args and mark.kwargs == func_mark.kwargs
def transfer_markers(funcobj, cls, mod): def transfer_markers(funcobj, cls, mod):
""" """
this function transfers class level markers and module level markers this function transfers class level markers and module level markers
@ -446,3 +263,152 @@ def transfer_markers(funcobj, cls, mod):
for mark in get_unpacked_marks(obj): for mark in get_unpacked_marks(obj):
if not _marked(funcobj, mark): if not _marked(funcobj, mark):
store_legacy_markinfo(funcobj, mark) store_legacy_markinfo(funcobj, mark)
def _marked(func, mark):
""" Returns True if :func: is already marked with :mark:, False otherwise.
This can happen if marker is applied to class and the test file is
invoked more than once.
"""
try:
func_mark = getattr(func, getattr(mark, 'combined', mark).name)
except AttributeError:
return False
return any(mark == info.combined for info in func_mark)
@attr.s
class MarkInfo(object):
""" Marking object created by :class:`MarkDecorator` instances. """
_marks = attr.ib()
combined = attr.ib(
repr=False,
default=attr.Factory(lambda self: reduce(Mark.combined_with, self._marks),
takes_self=True))
name = alias('combined.name', warning=MARK_INFO_ATTRIBUTE)
args = alias('combined.args', warning=MARK_INFO_ATTRIBUTE)
kwargs = alias('combined.kwargs', warning=MARK_INFO_ATTRIBUTE)
@classmethod
def for_mark(cls, mark):
return cls([mark])
def __repr__(self):
return "<MarkInfo {0!r}>".format(self.combined)
def add_mark(self, mark):
""" add a MarkInfo with the given args and kwargs. """
self._marks.append(mark)
self.combined = self.combined.combined_with(mark)
def __iter__(self):
""" yield MarkInfo objects each relating to a marking-call. """
return map(MarkInfo.for_mark, self._marks)
class MarkGenerator(object):
""" Factory for :class:`MarkDecorator` objects - exposed as
a ``pytest.mark`` singleton instance. Example::
import pytest
@pytest.mark.slowtest
def test_function():
pass
will set a 'slowtest' :class:`MarkInfo` object
on the ``test_function`` object. """
_config = None
def __getattr__(self, name):
if name[0] == "_":
raise AttributeError("Marker name must NOT start with underscore")
if self._config is not None:
self._check(name)
return MarkDecorator(Mark(name, (), {}))
def _check(self, name):
try:
if name in self._markers:
return
except AttributeError:
pass
self._markers = values = set()
for line in self._config.getini("markers"):
marker = line.split(":", 1)[0]
marker = marker.rstrip()
x = marker.split("(", 1)[0]
values.add(x)
if name not in self._markers:
raise AttributeError("%r not a registered marker" % (name,))
MARK_GEN = MarkGenerator()
class NodeKeywords(MappingMixin):
def __init__(self, node):
self.node = node
self.parent = node.parent
self._markers = {node.name: True}
def __getitem__(self, key):
try:
return self._markers[key]
except KeyError:
if self.parent is None:
raise
return self.parent.keywords[key]
def __setitem__(self, key, value):
self._markers[key] = value
def __delitem__(self, key):
raise ValueError("cannot delete key in keywords dict")
def __iter__(self):
seen = self._seen()
return iter(seen)
def _seen(self):
seen = set(self._markers)
if self.parent is not None:
seen.update(self.parent.keywords)
return seen
def __len__(self):
return len(self._seen())
def __repr__(self):
return "<NodeKeywords for node %s>" % (self.node, )
@attr.s(cmp=False, hash=False)
class NodeMarkers(object):
"""
internal structure for storing marks belonging to a node
.. warning::
unstable api
"""
own_markers = attr.ib(default=attr.Factory(list))
def update(self, add_markers):
"""update the own markers
"""
self.own_markers.extend(add_markers)
def find(self, name):
"""
find markers in own nodes or parent nodes
needs a better place
"""
for mark in self.own_markers:
if mark.name == name:
yield mark
def __iter__(self):
return iter(self.own_markers)
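A brief sketch of how the attrs-based Mark/MarkDecorator pair above composes (behaviour as implemented in this branch; the marker name and function are illustrative):

    import pytest

    slow = pytest.mark.slow(timeout=30)          # MarkDecorator around Mark('slow', (), {'timeout': 30})
    slower = slow.with_args("db", timeout=120)   # combined_with(): args appended, kwargs merged

    @slower
    def test_heavy_query():
        pass

    # store_mark() appended the combined Mark to the function's pytestmark list
    mark = test_heavy_query.pytestmark[0]
    print(mark.name, mark.args, mark.kwargs)     # slow ('db',) {'timeout': 120}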

_pytest/monkeypatch.py

@ -4,8 +4,9 @@ from __future__ import absolute_import, division, print_function
import os import os
import sys import sys
import re import re
from contextlib import contextmanager
from py.builtin import _basestring import six
from _pytest.fixtures import fixture from _pytest.fixtures import fixture
RE_IMPORT_ERROR_NAME = re.compile("^No module named (.*)$") RE_IMPORT_ERROR_NAME = re.compile("^No module named (.*)$")
@ -79,7 +80,7 @@ def annotated_getattr(obj, name, ann):
def derive_importpath(import_path, raising): def derive_importpath(import_path, raising):
if not isinstance(import_path, _basestring) or "." not in import_path: if not isinstance(import_path, six.string_types) or "." not in import_path:
raise TypeError("must be absolute import path string, not %r" % raise TypeError("must be absolute import path string, not %r" %
(import_path,)) (import_path,))
module, attr = import_path.rsplit('.', 1) module, attr = import_path.rsplit('.', 1)
@ -89,7 +90,7 @@ def derive_importpath(import_path, raising):
return attr, target return attr, target
class Notset: class Notset(object):
def __repr__(self): def __repr__(self):
return "<notset>" return "<notset>"
@ -97,7 +98,7 @@ class Notset:
notset = Notset() notset = Notset()
class MonkeyPatch: class MonkeyPatch(object):
""" Object returned by the ``monkeypatch`` fixture keeping a record of setattr/item/env/syspath changes. """ Object returned by the ``monkeypatch`` fixture keeping a record of setattr/item/env/syspath changes.
""" """
@ -107,6 +108,29 @@ class MonkeyPatch:
self._cwd = None self._cwd = None
self._savesyspath = None self._savesyspath = None
@contextmanager
def context(self):
"""
Context manager that returns a new :class:`MonkeyPatch` object which
undoes any patching done inside the ``with`` block upon exit:
.. code-block:: python
import functools
def test_partial(monkeypatch):
with monkeypatch.context() as m:
m.setattr(functools, "partial", 3)
Useful in situations where it is desired to undo some patches before the test ends,
such as mocking ``stdlib`` functions that might break pytest itself if mocked (for examples
of this see `#3290 <https://github.com/pytest-dev/pytest/issues/3290>`_).
"""
m = MonkeyPatch()
try:
yield m
finally:
m.undo()
def setattr(self, target, name, value=notset, raising=True): def setattr(self, target, name, value=notset, raising=True):
""" Set attribute value on target, memorizing the old value. """ Set attribute value on target, memorizing the old value.
By default raise AttributeError if the attribute did not exist. By default raise AttributeError if the attribute did not exist.
@ -114,7 +138,7 @@ class MonkeyPatch:
For convenience you can specify a string as ``target`` which For convenience you can specify a string as ``target`` which
will be interpreted as a dotted import path, with the last part will be interpreted as a dotted import path, with the last part
being the attribute name. Example: being the attribute name. Example:
``monkeypatch.setattr("os.getcwd", lambda x: "/")`` ``monkeypatch.setattr("os.getcwd", lambda: "/")``
would set the ``getcwd`` function of the ``os`` module. would set the ``getcwd`` function of the ``os`` module.
The ``raising`` value determines if the setattr should fail The ``raising`` value determines if the setattr should fail
@ -125,7 +149,7 @@ class MonkeyPatch:
import inspect import inspect
if value is notset: if value is notset:
if not isinstance(target, _basestring): if not isinstance(target, six.string_types):
raise TypeError("use setattr(target, name, value) or " raise TypeError("use setattr(target, name, value) or "
"setattr(target, value) with target being a dotted " "setattr(target, value) with target being a dotted "
"import string") "import string")
@ -155,7 +179,7 @@ class MonkeyPatch:
""" """
__tracebackhide__ = True __tracebackhide__ = True
if name is notset: if name is notset:
if not isinstance(target, _basestring): if not isinstance(target, six.string_types):
raise TypeError("use delattr(target, name) or " raise TypeError("use delattr(target, name) or "
"delattr(target) with target being a dotted " "delattr(target) with target being a dotted "
"import string") "import string")

_pytest/nodes.py Normal file

@ -0,0 +1,392 @@
from __future__ import absolute_import, division, print_function
import os
import six
import py
import attr
import _pytest
import _pytest._code
from _pytest.mark.structures import NodeKeywords, MarkInfo
SEP = "/"
tracebackcutdir = py.path.local(_pytest.__file__).dirpath()
def _splitnode(nodeid):
"""Split a nodeid into constituent 'parts'.
Node IDs are strings, and can be things like:
''
'testing/code'
'testing/code/test_excinfo.py'
'testing/code/test_excinfo.py::TestFormattedExcinfo::()'
Return values are lists e.g.
[]
['testing', 'code']
['testing', 'code', 'test_excinfo.py']
['testing', 'code', 'test_excinfo.py', 'TestFormattedExcinfo', '()']
"""
if nodeid == '':
# If there is no root node at all, return an empty list so the caller's logic can remain sane
return []
parts = nodeid.split(SEP)
# Replace single last element 'test_foo.py::Bar::()' with multiple elements 'test_foo.py', 'Bar', '()'
parts[-1:] = parts[-1].split("::")
return parts
def ischildnode(baseid, nodeid):
"""Return True if the nodeid is a child node of the baseid.
E.g. 'foo/bar::Baz::()' is a child of 'foo', 'foo/bar' and 'foo/bar::Baz', but not of 'foo/blorp'
"""
base_parts = _splitnode(baseid)
node_parts = _splitnode(nodeid)
if len(node_parts) < len(base_parts):
return False
return node_parts[:len(base_parts)] == base_parts
@attr.s
class _CompatProperty(object):
name = attr.ib()
def __get__(self, obj, owner):
if obj is None:
return self
# TODO: reenable in the features branch
# warnings.warn(
# "usage of {owner!r}.{name} is deprecated, please use pytest.{name} instead".format(
# name=self.name, owner=type(owner).__name__),
# PendingDeprecationWarning, stacklevel=2)
return getattr(__import__('pytest'), self.name)
class Node(object):
""" base class for Collector and Item the test collection tree.
Collector subclasses have children, Items are terminal nodes."""
def __init__(self, name, parent=None, config=None, session=None, fspath=None, nodeid=None):
#: a unique name within the scope of the parent node
self.name = name
#: the parent collector node.
self.parent = parent
#: the pytest config object
self.config = config or parent.config
#: the session this node is part of
self.session = session or parent.session
#: filesystem path where this node was collected from (can be None)
self.fspath = fspath or getattr(parent, 'fspath', None)
#: keywords/markers collected from all scopes
self.keywords = NodeKeywords(self)
#: the marker objects belonging to this node
self.own_markers = []
#: allow adding of extra keywords to use for matching
self.extra_keyword_matches = set()
# used for storing artificial fixturedefs for direct parametrization
self._name2pseudofixturedef = {}
if nodeid is not None:
self._nodeid = nodeid
else:
assert parent is not None
self._nodeid = self.parent.nodeid + "::" + self.name
@property
def ihook(self):
""" fspath sensitive hook proxy used to call pytest hooks"""
return self.session.gethookproxy(self.fspath)
Module = _CompatProperty("Module")
Class = _CompatProperty("Class")
Instance = _CompatProperty("Instance")
Function = _CompatProperty("Function")
File = _CompatProperty("File")
Item = _CompatProperty("Item")
def _getcustomclass(self, name):
maybe_compatprop = getattr(type(self), name)
if isinstance(maybe_compatprop, _CompatProperty):
return getattr(__import__('pytest'), name)
else:
cls = getattr(self, name)
# TODO: reenable in the features branch
# warnings.warn("use of node.%s is deprecated, "
# "use pytest_pycollect_makeitem(...) to create custom "
# "collection nodes" % name, category=DeprecationWarning)
return cls
def __repr__(self):
return "<%s %r>" % (self.__class__.__name__,
getattr(self, 'name', None))
def warn(self, code, message):
""" generate a warning with the given code and message for this
item. """
assert isinstance(code, str)
fslocation = getattr(self, "location", None)
if fslocation is None:
fslocation = getattr(self, "fspath", None)
self.ihook.pytest_logwarning.call_historic(kwargs=dict(
code=code, message=message,
nodeid=self.nodeid, fslocation=fslocation))
# methods for ordering nodes
@property
def nodeid(self):
""" a ::-separated string denoting its collection tree address. """
return self._nodeid
def __hash__(self):
return hash(self.nodeid)
def setup(self):
pass
def teardown(self):
pass
def listchain(self):
""" return list of all parent collectors up to self,
starting from root of collection tree. """
chain = []
item = self
while item is not None:
chain.append(item)
item = item.parent
chain.reverse()
return chain
def add_marker(self, marker):
""" dynamically add a marker object to the node.
``marker`` can be a string or pytest.mark.* instance.
"""
from _pytest.mark import MarkDecorator, MARK_GEN
if isinstance(marker, six.string_types):
marker = getattr(MARK_GEN, marker)
elif not isinstance(marker, MarkDecorator):
raise ValueError("is not a string or pytest.mark.* Marker")
self.keywords[marker.name] = marker
self.own_markers.append(marker)
def iter_markers(self):
"""
iterate over all markers of the node
"""
return (x[1] for x in self.iter_markers_with_node())
def iter_markers_with_node(self):
"""
iterate over all markers of the node
returns sequence of tuples (node, mark)
"""
for node in reversed(self.listchain()):
for mark in node.own_markers:
yield node, mark
def get_marker(self, name):
""" get a marker object from this node or None if
the node doesn't have a marker with that name.
.. warning::
deprecated
"""
markers = [x for x in self.iter_markers() if x.name == name]
if markers:
return MarkInfo(markers)
def listextrakeywords(self):
""" Return a set of all extra keywords in self and any parents."""
extra_keywords = set()
for item in self.listchain():
extra_keywords.update(item.extra_keyword_matches)
return extra_keywords
def listnames(self):
return [x.name for x in self.listchain()]
def addfinalizer(self, fin):
""" register a function to be called when this node is finalized.
This method can only be called when this node is active
in a setup chain, for example during self.setup().
"""
self.session._setupstate.addfinalizer(fin, self)
def getparent(self, cls):
""" get the next parent node (including ourself)
which is an instance of the given class"""
current = self
while current and not isinstance(current, cls):
current = current.parent
return current
def _prunetraceback(self, excinfo):
pass
def _repr_failure_py(self, excinfo, style=None):
fm = self.session._fixturemanager
if excinfo.errisinstance(fm.FixtureLookupError):
return excinfo.value.formatrepr()
tbfilter = True
if self.config.option.fulltrace:
style = "long"
else:
tb = _pytest._code.Traceback([excinfo.traceback[-1]])
self._prunetraceback(excinfo)
if len(excinfo.traceback) == 0:
excinfo.traceback = tb
tbfilter = False # prunetraceback already does it
if style == "auto":
style = "long"
# XXX should excinfo.getrepr record all data and toterminal() process it?
if style is None:
if self.config.option.tbstyle == "short":
style = "short"
else:
style = "long"
try:
os.getcwd()
abspath = False
except OSError:
abspath = True
return excinfo.getrepr(funcargs=True, abspath=abspath,
showlocals=self.config.option.showlocals,
style=style, tbfilter=tbfilter)
repr_failure = _repr_failure_py
class Collector(Node):
""" Collector instances create children through collect()
and thus iteratively build a tree.
"""
class CollectError(Exception):
""" an error during collection, contains a custom message. """
def collect(self):
""" returns a list of children (items and collectors)
for this collection node.
"""
raise NotImplementedError("abstract")
def repr_failure(self, excinfo):
""" represent a collection failure. """
if excinfo.errisinstance(self.CollectError):
exc = excinfo.value
return str(exc.args[0])
return self._repr_failure_py(excinfo, style="short")
def _prunetraceback(self, excinfo):
if hasattr(self, 'fspath'):
traceback = excinfo.traceback
ntraceback = traceback.cut(path=self.fspath)
if ntraceback == traceback:
ntraceback = ntraceback.cut(excludepath=tracebackcutdir)
excinfo.traceback = ntraceback.filter()
def _check_initialpaths_for_relpath(session, fspath):
for initial_path in session._initialpaths:
if fspath.common(initial_path) == initial_path:
return fspath.relto(initial_path.dirname)
class FSCollector(Collector):
def __init__(self, fspath, parent=None, config=None, session=None, nodeid=None):
fspath = py.path.local(fspath) # xxx only for test_resultlog.py?
name = fspath.basename
if parent is not None:
rel = fspath.relto(parent.fspath)
if rel:
name = rel
name = name.replace(os.sep, SEP)
self.fspath = fspath
session = session or parent.session
if nodeid is None:
nodeid = self.fspath.relto(session.config.rootdir)
if not nodeid:
nodeid = _check_initialpaths_for_relpath(session, fspath)
if os.sep != SEP:
nodeid = nodeid.replace(os.sep, SEP)
super(FSCollector, self).__init__(name, parent, config, session, nodeid=nodeid, fspath=fspath)
class File(FSCollector):
""" base class for collecting tests from a file. """
class Item(Node):
""" a basic test invocation item. Note that for a single function
there might be multiple test invocation items.
"""
nextitem = None
def __init__(self, name, parent=None, config=None, session=None, nodeid=None):
super(Item, self).__init__(name, parent, config, session, nodeid=nodeid)
self._report_sections = []
#: user properties is a list of tuples (name, value) that holds user
#: defined properties for this test.
self.user_properties = []
def add_report_section(self, when, key, content):
"""
Adds a new report section, similar to what's done internally to add stdout and
stderr captured output::
item.add_report_section("call", "stdout", "report section contents")
:param str when:
One of the possible capture states, ``"setup"``, ``"call"``, ``"teardown"``.
:param str key:
Name of the section, can be customized at will. Pytest uses ``"stdout"`` and
``"stderr"`` internally.
:param str content:
The full contents as a string.
"""
if content:
self._report_sections.append((when, key, content))
def reportinfo(self):
return self.fspath, None, ""
@property
def location(self):
try:
return self._location
except AttributeError:
location = self.reportinfo()
# bestrelpath is a quite slow function
cache = self.config.__dict__.setdefault("_bestrelpathcache", {})
try:
fspath = cache[location[0]]
except KeyError:
fspath = self.session.fspath.bestrelpath(location[0])
cache[location[0]] = fspath
location = (fspath, location[1], str(location[2]))
self._location = location
return location
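The nodeid helpers at the top of this new module are pure functions, so they can be exercised directly; a quick sketch, assuming this branch's _pytest package is importable:

    from _pytest.nodes import ischildnode, _splitnode

    parts = _splitnode("testing/code/test_excinfo.py::TestFormattedExcinfo::()")
    assert parts == ["testing", "code", "test_excinfo.py", "TestFormattedExcinfo", "()"]

    assert ischildnode("foo/bar", "foo/bar::Baz::()")        # base parts are a prefix
    assert not ischildnode("foo/blorp", "foo/bar::Baz::()")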

_pytest/nose.py

@ -3,7 +3,6 @@ from __future__ import absolute_import, division, print_function
import sys import sys
import py
from _pytest import unittest, runner, python from _pytest import unittest, runner, python
from _pytest.config import hookimpl from _pytest.config import hookimpl
@ -38,14 +37,15 @@ def pytest_runtest_setup(item):
if not call_optional(item.obj, 'setup'): if not call_optional(item.obj, 'setup'):
# call module level setup if there is no object level one # call module level setup if there is no object level one
call_optional(item.parent.obj, 'setup') call_optional(item.parent.obj, 'setup')
#XXX this implies we only call teardown when setup worked # XXX this implies we only call teardown when setup worked
item.session._setupstate.addfinalizer((lambda: teardown_nose(item)), item) item.session._setupstate.addfinalizer((lambda: teardown_nose(item)), item)
def teardown_nose(item): def teardown_nose(item):
if is_potential_nosetest(item): if is_potential_nosetest(item):
if not call_optional(item.obj, 'teardown'): if not call_optional(item.obj, 'teardown'):
call_optional(item.parent.obj, 'teardown') call_optional(item.parent.obj, 'teardown')
#if hasattr(item.parent, '_nosegensetup'): # if hasattr(item.parent, '_nosegensetup'):
# #call_optional(item._nosegensetup, 'teardown') # #call_optional(item._nosegensetup, 'teardown')
# del item.parent._nosegensetup # del item.parent._nosegensetup
@ -65,7 +65,7 @@ def is_potential_nosetest(item):
def call_optional(obj, name): def call_optional(obj, name):
method = getattr(obj, name, None) method = getattr(obj, name, None)
isfixture = hasattr(method, "_pytestfixturefunction") isfixture = hasattr(method, "_pytestfixturefunction")
if method is not None and not isfixture and py.builtin.callable(method): if method is not None and not isfixture and callable(method):
# If there's any problems allow the exception to raise rather than # If there's any problems allow the exception to raise rather than
# silently ignoring them # silently ignoring them
method() method()

_pytest/outcomes.py Normal file

@ -0,0 +1,147 @@
"""
exception classes and constants handling test outcomes
as well as functions creating them
"""
from __future__ import absolute_import, division, print_function
import py
import sys
class OutcomeException(BaseException):
""" OutcomeException and its subclass instances indicate and
contain info about test and collection outcomes.
"""
def __init__(self, msg=None, pytrace=True):
BaseException.__init__(self, msg)
self.msg = msg
self.pytrace = pytrace
def __repr__(self):
if self.msg:
val = self.msg
if isinstance(val, bytes):
val = py._builtin._totext(val, errors='replace')
return val
return "<%s instance>" % (self.__class__.__name__,)
__str__ = __repr__
TEST_OUTCOME = (OutcomeException, Exception)
class Skipped(OutcomeException):
# XXX hackish: on 3k we fake to live in the builtins
# in order to have Skipped exception printing shorter/nicer
__module__ = 'builtins'
def __init__(self, msg=None, pytrace=True, allow_module_level=False):
OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
self.allow_module_level = allow_module_level
class Failed(OutcomeException):
""" raised from an explicit call to pytest.fail() """
__module__ = 'builtins'
class Exit(KeyboardInterrupt):
""" raised for immediate program exits (no tracebacks/summaries)"""
def __init__(self, msg="unknown reason"):
self.msg = msg
KeyboardInterrupt.__init__(self, msg)
# exposed helper methods
def exit(msg):
""" exit testing process as if KeyboardInterrupt was triggered. """
__tracebackhide__ = True
raise Exit(msg)
exit.Exception = Exit
def skip(msg="", **kwargs):
""" skip an executing test with the given message. Note: it's usually
better to use the pytest.mark.skipif marker to declare a test to be
skipped under certain conditions like mismatching platforms or
dependencies. See the pytest_skipping plugin for details.
:kwarg bool allow_module_level: allows this function to be called at
module level, skipping the rest of the module. Defaults to False.
"""
__tracebackhide__ = True
allow_module_level = kwargs.pop('allow_module_level', False)
if kwargs:
keys = [k for k in kwargs.keys()]
raise TypeError('unexpected keyword arguments: {0}'.format(keys))
raise Skipped(msg=msg, allow_module_level=allow_module_level)
skip.Exception = Skipped
def fail(msg="", pytrace=True):
""" explicitly fail an currently-executing test with the given Message.
:arg pytrace: if false the msg represents the full failure information
and no python traceback will be reported.
"""
__tracebackhide__ = True
raise Failed(msg=msg, pytrace=pytrace)
fail.Exception = Failed
class XFailed(fail.Exception):
""" raised from an explicit call to pytest.xfail() """
def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
__tracebackhide__ = True
raise XFailed(reason)
xfail.Exception = XFailed
def importorskip(modname, minversion=None):
""" return imported module if it has at least "minversion" as its
__version__ attribute. If no minversion is specified, a skip
is only triggered if the module cannot be imported.
"""
import warnings
__tracebackhide__ = True
compile(modname, '', 'eval') # to catch syntaxerrors
should_skip = False
with warnings.catch_warnings():
# make sure to ignore ImportWarnings that might happen because
# of existing directories with the same name we're trying to
# import but without a __init__.py file
warnings.simplefilter('ignore')
try:
__import__(modname)
except ImportError:
# Do not raise chained exception here (#1485)
should_skip = True
if should_skip:
raise Skipped("could not import %r" % (modname,), allow_module_level=True)
mod = sys.modules[modname]
if minversion is None:
return mod
verattr = getattr(mod, '__version__', None)
if minversion is not None:
try:
from pkg_resources import parse_version as pv
except ImportError:
raise Skipped("we have a required version for %r but can not import "
"pkg_resources to parse version strings." % (modname,),
allow_module_level=True)
if verattr is None or pv(verattr) < pv(minversion):
raise Skipped("module %r has __version__ %r, required is: %r" % (
modname, verattr, minversion), allow_module_level=True)
return mod
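For reference, a sketch of how the helpers collected in this new module are typically used from test code (docutils is just an arbitrary optional dependency here):

    import sys
    import pytest

    # skips the whole module (allow_module_level=True) if docutils is missing or too old
    docutils = pytest.importorskip("docutils", minversion="0.12")

    def test_docutils_version():
        if sys.platform.startswith("java"):
            pytest.skip("not meaningful on Jython")   # raises the Skipped outcome above
        assert docutils.__version__                   # non-None: minversion was checked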

_pytest/pastebin.py

@ -2,6 +2,7 @@
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import pytest import pytest
import six
import sys import sys
import tempfile import tempfile
@ -16,7 +17,6 @@ def pytest_addoption(parser):
@pytest.hookimpl(trylast=True) @pytest.hookimpl(trylast=True)
def pytest_configure(config): def pytest_configure(config):
import py
if config.option.pastebin == "all": if config.option.pastebin == "all":
tr = config.pluginmanager.getplugin('terminalreporter') tr = config.pluginmanager.getplugin('terminalreporter')
# if no terminal reporter plugin is present, nothing we can do here; # if no terminal reporter plugin is present, nothing we can do here;
@ -29,7 +29,7 @@ def pytest_configure(config):
def tee_write(s, **kwargs): def tee_write(s, **kwargs):
oldwrite(s, **kwargs) oldwrite(s, **kwargs)
if py.builtin._istext(s): if isinstance(s, six.text_type):
s = s.encode('utf-8') s = s.encode('utf-8')
config._pastebinfile.write(s) config._pastebinfile.write(s)
@ -97,4 +97,4 @@ def pytest_terminal_summary(terminalreporter):
s = tw.stringio.getvalue() s = tw.stringio.getvalue()
assert len(s) assert len(s)
pastebinurl = create_new_paste(s) pastebinurl = create_new_paste(s)
tr.write_line("%s --> %s" %(msg, pastebinurl)) tr.write_line("%s --> %s" % (msg, pastebinurl))

File diff suppressed because it is too large

_pytest/python.py

@ -6,28 +6,41 @@ import inspect
import sys import sys
import os import os
import collections import collections
import warnings
from textwrap import dedent
from itertools import count from itertools import count
import py import py
import six
from _pytest.mark import MarkerError from _pytest.mark import MarkerError
from _pytest.config import hookimpl from _pytest.config import hookimpl
import _pytest import _pytest
import _pytest._pluggy as pluggy import pluggy
from _pytest import fixtures from _pytest import fixtures
from _pytest import main from _pytest import nodes
from _pytest import deprecated
from _pytest.compat import ( from _pytest.compat import (
isclass, isfunction, is_generator, _escape_strings, isclass, isfunction, is_generator, ascii_escaped,
REGEX_TYPE, STRING_TYPES, NoneType, NOTSET, REGEX_TYPE, STRING_TYPES, NoneType, NOTSET,
get_real_func, getfslineno, safe_getattr, get_real_func, getfslineno, safe_getattr,
safe_str, getlocation, enum, safe_str, getlocation, enum,
) )
from _pytest.runner import fail from _pytest.outcomes import fail
from _pytest.mark import transfer_markers from _pytest.mark.structures import transfer_markers, get_unpacked_marks
cutdir1 = py.path.local(pluggy.__file__.rstrip("oc"))
cutdir2 = py.path.local(_pytest.__file__).dirpath() # relative paths that we use to filter traceback entries from appearing to the user;
cutdir3 = py.path.local(py.__file__).dirpath() # see filter_traceback
# note: if we need to add more paths than what we have now we should probably use a list
# for better maintenance
_pluggy_dir = py.path.local(pluggy.__file__.rstrip("oc"))
# pluggy is either a package or a single module depending on the version
if _pluggy_dir.basename == '__init__.py':
_pluggy_dir = _pluggy_dir.dirpath()
_pytest_dir = py.path.local(_pytest.__file__).dirpath()
_py_dir = py.path.local(py.__file__).dirpath()
def filter_traceback(entry): def filter_traceback(entry):
@ -42,11 +55,10 @@ def filter_traceback(entry):
is_generated = '<' in raw_filename and '>' in raw_filename is_generated = '<' in raw_filename and '>' in raw_filename
if is_generated: if is_generated:
return False return False
# entry.path might point to an inexisting file, in which case it will # entry.path might point to an non-existing file, in which case it will
# alsso return a str object. see #1133 # also return a str object. see #1133
p = py.path.local(entry.path) p = py.path.local(entry.path)
return p != cutdir1 and not p.relto(cutdir2) and not p.relto(cutdir3) return not p.relto(_pluggy_dir) and not p.relto(_pytest_dir) and not p.relto(_py_dir)
def pyobj_property(name): def pyobj_property(name):
@ -76,9 +88,9 @@ def pytest_addoption(parser):
parser.addini("python_files", type="args", parser.addini("python_files", type="args",
default=['test_*.py', '*_test.py'], default=['test_*.py', '*_test.py'],
help="glob-style file patterns for Python test module discovery") help="glob-style file patterns for Python test module discovery")
parser.addini("python_classes", type="args", default=["Test",], parser.addini("python_classes", type="args", default=["Test", ],
help="prefixes or glob names for Python test class discovery") help="prefixes or glob names for Python test class discovery")
parser.addini("python_functions", type="args", default=["test",], parser.addini("python_functions", type="args", default=["test", ],
help="prefixes or glob names for Python test function and " help="prefixes or glob names for Python test function and "
"method discovery") "method discovery")
@ -105,13 +117,11 @@ def pytest_generate_tests(metafunc):
if hasattr(metafunc.function, attr): if hasattr(metafunc.function, attr):
msg = "{0} has '{1}', spelling should be 'parametrize'" msg = "{0} has '{1}', spelling should be 'parametrize'"
raise MarkerError(msg.format(metafunc.function.__name__, attr)) raise MarkerError(msg.format(metafunc.function.__name__, attr))
try: for marker in metafunc.definition.iter_markers():
markers = metafunc.function.parametrize if marker.name == 'parametrize':
except AttributeError:
return
for marker in markers:
metafunc.parametrize(*marker.args, **marker.kwargs) metafunc.parametrize(*marker.args, **marker.kwargs)
def pytest_configure(config): def pytest_configure(config):
config.addinivalue_line("markers", config.addinivalue_line("markers",
"parametrize(argnames, argvalues): call a test function multiple " "parametrize(argnames, argvalues): call a test function multiple "
@ -155,9 +165,11 @@ def pytest_collect_file(path, parent):
ihook = parent.session.gethookproxy(path) ihook = parent.session.gethookproxy(path)
return ihook.pytest_pycollect_makemodule(path=path, parent=parent) return ihook.pytest_pycollect_makemodule(path=path, parent=parent)
def pytest_pycollect_makemodule(path, parent): def pytest_pycollect_makemodule(path, parent):
return Module(path, parent) return Module(path, parent)
@hookimpl(hookwrapper=True) @hookimpl(hookwrapper=True)
def pytest_pycollect_makeitem(collector, name, obj): def pytest_pycollect_makeitem(collector, name, obj):
outcome = yield outcome = yield
@ -176,8 +188,7 @@ def pytest_pycollect_makeitem(collector, name, obj):
# or a funtools.wrapped. # or a funtools.wrapped.
# We musn't if it's been wrapped with mock.patch (python 2 only) # We musn't if it's been wrapped with mock.patch (python 2 only)
if not (isfunction(obj) or isfunction(get_real_func(obj))): if not (isfunction(obj) or isfunction(get_real_func(obj))):
collector.warn(code="C2", message= collector.warn(code="C2", message="cannot collect %r because it is not a function."
"cannot collect %r because it is not a function."
% name, ) % name, )
elif getattr(obj, "__test__", True): elif getattr(obj, "__test__", True):
if is_generator(obj): if is_generator(obj):
@ -186,22 +197,32 @@ def pytest_pycollect_makeitem(collector, name, obj):
res = list(collector._genfunctions(name, obj)) res = list(collector._genfunctions(name, obj))
outcome.force_result(res) outcome.force_result(res)
def pytest_make_parametrize_id(config, val, argname=None): def pytest_make_parametrize_id(config, val, argname=None):
return None return None
class PyobjContext(object): class PyobjContext(object):
module = pyobj_property("Module") module = pyobj_property("Module")
cls = pyobj_property("Class") cls = pyobj_property("Class")
instance = pyobj_property("Instance") instance = pyobj_property("Instance")
class PyobjMixin(PyobjContext): class PyobjMixin(PyobjContext):
_ALLOW_MARKERS = True
def __init__(self, *k, **kw):
super(PyobjMixin, self).__init__(*k, **kw)
def obj(): def obj():
def fget(self): def fget(self):
obj = getattr(self, '_obj', None) obj = getattr(self, '_obj', None)
if obj is None: if obj is None:
self._obj = obj = self._getobj() self._obj = obj = self._getobj()
# XXX evil hack
# used to avoid Instance collector marker duplication
if self._ALLOW_MARKERS:
self.own_markers.extend(get_unpacked_marks(self.obj))
return obj return obj
def fset(self, value): def fset(self, value):
@ -253,7 +274,8 @@ class PyobjMixin(PyobjContext):
assert isinstance(lineno, int) assert isinstance(lineno, int)
return fspath, lineno, modpath return fspath, lineno, modpath
class PyCollector(PyobjMixin, main.Collector):
class PyCollector(PyobjMixin, nodes.Collector):
def funcnamefilter(self, name): def funcnamefilter(self, name):
return self._matches_prefix_or_glob_option('python_functions', name) return self._matches_prefix_or_glob_option('python_functions', name)
@ -271,10 +293,22 @@ class PyCollector(PyobjMixin, main.Collector):
return self._matches_prefix_or_glob_option('python_classes', name) return self._matches_prefix_or_glob_option('python_classes', name)
def istestfunction(self, obj, name): def istestfunction(self, obj, name):
if self.funcnamefilter(name) or self.isnosetest(obj):
if isinstance(obj, staticmethod):
# static methods need to be unwrapped
obj = safe_getattr(obj, '__func__', False)
if obj is False:
# Python 2.6 wraps in a different way that we won't try to handle
msg = "cannot collect static method %r because " \
"it is not a function (always the case in Python 2.6)"
self.warn(
code="C2", message=msg % name)
return False
return ( return (
(self.funcnamefilter(name) or self.isnosetest(obj)) and
safe_getattr(obj, "__call__", False) and fixtures.getfixturemarker(obj) is None safe_getattr(obj, "__call__", False) and fixtures.getfixturemarker(obj) is None
) )
else:
return False
def istestclass(self, obj, name): def istestclass(self, obj, name):
return self.classnamefilter(name) or self.isnosetest(obj) return self.classnamefilter(name) or self.isnosetest(obj)
@ -305,23 +339,27 @@ class PyCollector(PyobjMixin, main.Collector):
for basecls in inspect.getmro(self.obj.__class__): for basecls in inspect.getmro(self.obj.__class__):
dicts.append(basecls.__dict__) dicts.append(basecls.__dict__)
seen = {} seen = {}
l = [] values = []
for dic in dicts: for dic in dicts:
for name, obj in list(dic.items()): for name, obj in list(dic.items()):
if name in seen: if name in seen:
continue continue
seen[name] = True seen[name] = True
res = self.makeitem(name, obj) res = self._makeitem(name, obj)
if res is None: if res is None:
continue continue
if not isinstance(res, list): if not isinstance(res, list):
res = [res] res = [res]
l.extend(res) values.extend(res)
l.sort(key=lambda item: item.reportinfo()[:2]) values.sort(key=lambda item: item.reportinfo()[:2])
return l return values
def makeitem(self, name, obj): def makeitem(self, name, obj):
#assert self.ihook.fspath == self.fspath, self warnings.warn(deprecated.COLLECTOR_MAKEITEM, stacklevel=2)
return self._makeitem(name, obj)
def _makeitem(self, name, obj):
# assert self.ihook.fspath == self.fspath, self
return self.ihook.pytest_pycollect_makeitem( return self.ihook.pytest_pycollect_makeitem(
collector=self, name=name, obj=obj) collector=self, name=name, obj=obj)
@ -331,9 +369,15 @@ class PyCollector(PyobjMixin, main.Collector):
cls = clscol and clscol.obj or None cls = clscol and clscol.obj or None
transfer_markers(funcobj, cls, module) transfer_markers(funcobj, cls, module)
fm = self.session._fixturemanager fm = self.session._fixturemanager
fixtureinfo = fm.getfixtureinfo(self, funcobj, cls)
metafunc = Metafunc(funcobj, fixtureinfo, self.config, definition = FunctionDefinition(
cls=cls, module=module) name=name,
parent=self,
callobj=funcobj,
)
fixtureinfo = fm.getfixtureinfo(definition, funcobj, cls)
metafunc = Metafunc(definition, fixtureinfo, self.config, cls=cls, module=module)
methods = [] methods = []
if hasattr(module, "pytest_generate_tests"): if hasattr(module, "pytest_generate_tests"):
methods.append(module.pytest_generate_tests) methods.append(module.pytest_generate_tests)
@ -357,12 +401,12 @@ class PyCollector(PyobjMixin, main.Collector):
yield Function(name=subname, parent=self, yield Function(name=subname, parent=self,
callspec=callspec, callobj=funcobj, callspec=callspec, callobj=funcobj,
fixtureinfo=fixtureinfo, fixtureinfo=fixtureinfo,
keywords={callspec.id:True}, keywords={callspec.id: True},
originalname=name, originalname=name,
) )
class Module(main.File, PyCollector): class Module(nodes.File, PyCollector):
""" Collector for test classes and functions. """ """ Collector for test classes and functions. """
def _getobj(self): def _getobj(self):
@ -409,9 +453,10 @@ class Module(main.File, PyCollector):
if e.allow_module_level: if e.allow_module_level:
raise raise
raise self.CollectError( raise self.CollectError(
"Using pytest.skip outside of a test is not allowed. If you are " "Using pytest.skip outside of a test is not allowed. "
"trying to decorate a test function, use the @pytest.mark.skip " "To decorate a test function, use the @pytest.mark.skip "
"or @pytest.mark.skipif decorators instead." "or @pytest.mark.skipif decorators instead, and to skip a "
"module use `pytestmark = pytest.mark.{skip,skipif}."
) )
self.config.pluginmanager.consider_module(mod) self.config.pluginmanager.consider_module(mod)
return mod return mod
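For context, a minimal sketch (not part of this diff) of the module-level skip that the new error message points to; the reason string is made up:

    import pytest

    # skip every test in this module, as the collection error above suggests
    pytestmark = pytest.mark.skip(reason="optional backend not installed")

    def test_something():
        assert True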
@ -462,6 +507,7 @@ def _get_xunit_func(obj, name):
class Class(PyCollector): class Class(PyCollector):
""" Collector for test methods. """ """ Collector for test methods. """
def collect(self): def collect(self):
if not safe_getattr(self.obj, "__test__", True): if not safe_getattr(self.obj, "__test__", True):
return [] return []
@ -488,7 +534,13 @@ class Class(PyCollector):
fin_class = getattr(fin_class, '__func__', fin_class) fin_class = getattr(fin_class, '__func__', fin_class)
self.addfinalizer(lambda: fin_class(self.obj)) self.addfinalizer(lambda: fin_class(self.obj))
class Instance(PyCollector): class Instance(PyCollector):
_ALLOW_MARKERS = False # hack, destroy later
# instances share the object with their parents in a way
# that duplicates markers on instances if not taken out
# can be removed at node structure reorganization time
def _getobj(self): def _getobj(self):
return self.parent.obj() return self.parent.obj()
@ -500,6 +552,7 @@ class Instance(PyCollector):
self.obj = self._getobj() self.obj = self._getobj()
return self.obj return self.obj
class FunctionMixin(PyobjMixin): class FunctionMixin(PyobjMixin):
""" mixin for the code common to Function and Generator. """ mixin for the code common to Function and Generator.
""" """
@ -535,7 +588,6 @@ class FunctionMixin(PyobjMixin):
if ntraceback == traceback: if ntraceback == traceback:
ntraceback = ntraceback.cut(path=path) ntraceback = ntraceback.cut(path=path)
if ntraceback == traceback: if ntraceback == traceback:
#ntraceback = ntraceback.cut(excludepath=cutdir2)
ntraceback = ntraceback.filter(filter_traceback) ntraceback = ntraceback.filter(filter_traceback)
if not ntraceback: if not ntraceback:
ntraceback = traceback ntraceback = traceback
@ -572,28 +624,28 @@ class Generator(FunctionMixin, PyCollector):
self.session._setupstate.prepare(self) self.session._setupstate.prepare(self)
# see FunctionMixin.setup and test_setupstate_is_preserved_134 # see FunctionMixin.setup and test_setupstate_is_preserved_134
self._preservedparent = self.parent.obj self._preservedparent = self.parent.obj
l = [] values = []
seen = {} seen = {}
for i, x in enumerate(self.obj()): for i, x in enumerate(self.obj()):
name, call, args = self.getcallargs(x) name, call, args = self.getcallargs(x)
if not callable(call): if not callable(call):
raise TypeError("%r yielded non callable test %r" %(self.obj, call,)) raise TypeError("%r yielded non callable test %r" % (self.obj, call,))
if name is None: if name is None:
name = "[%d]" % i name = "[%d]" % i
else: else:
name = "['%s']" % name name = "['%s']" % name
if name in seen: if name in seen:
raise ValueError("%r generated tests with non-unique name %r" %(self, name)) raise ValueError("%r generated tests with non-unique name %r" % (self, name))
seen[name] = True seen[name] = True
l.append(self.Function(name, self, args=args, callobj=call)) values.append(self.Function(name, self, args=args, callobj=call))
self.config.warn('C1', deprecated.YIELD_TESTS, fslocation=self.fspath) self.warn('C1', deprecated.YIELD_TESTS)
return l return values
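Since yield tests now emit the ``YIELD_TESTS`` deprecation warning, a hedged sketch of the usual migration to ``parametrize`` (names are illustrative):

    import pytest

    # deprecated yield-test style (triggers the C1 warning above):
    # def test_squares():
    #     for n in (1, 2, 3):
    #         yield check_square, n

    # equivalent parametrized style:
    @pytest.mark.parametrize("n", [1, 2, 3])
    def test_square(n):
        assert n * n == n ** 2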
def getcallargs(self, obj): def getcallargs(self, obj):
if not isinstance(obj, (tuple, list)): if not isinstance(obj, (tuple, list)):
obj = (obj,) obj = (obj,)
# explicit naming # explicit naming
if isinstance(obj[0], py.builtin._basestring): if isinstance(obj[0], six.string_types):
name = obj[0] name = obj[0]
obj = obj[1:] obj = obj[1:]
else: else:
@ -624,14 +676,14 @@ class CallSpec2(object):
self._globalid_args = set() self._globalid_args = set()
self._globalparam = NOTSET self._globalparam = NOTSET
self._arg2scopenum = {} # used for sorting parametrized resources self._arg2scopenum = {} # used for sorting parametrized resources
self.keywords = {} self.marks = []
self.indices = {} self.indices = {}
def copy(self, metafunc): def copy(self, metafunc):
cs = CallSpec2(self.metafunc) cs = CallSpec2(self.metafunc)
cs.funcargs.update(self.funcargs) cs.funcargs.update(self.funcargs)
cs.params.update(self.params) cs.params.update(self.params)
cs.keywords.update(self.keywords) cs.marks.extend(self.marks)
cs.indices.update(self.indices) cs.indices.update(self.indices)
cs._arg2scopenum.update(self._arg2scopenum) cs._arg2scopenum.update(self._arg2scopenum)
cs._idlist = list(self._idlist) cs._idlist = list(self._idlist)
@ -642,7 +694,7 @@ class CallSpec2(object):
def _checkargnotcontained(self, arg): def _checkargnotcontained(self, arg):
if arg in self.params or arg in self.funcargs: if arg in self.params or arg in self.funcargs:
raise ValueError("duplicate %r" %(arg,)) raise ValueError("duplicate %r" % (arg,))
def getparam(self, name): def getparam(self, name):
try: try:
@ -656,16 +708,16 @@ class CallSpec2(object):
def id(self): def id(self):
return "-".join(map(str, filter(None, self._idlist))) return "-".join(map(str, filter(None, self._idlist)))
def setmulti(self, valtypes, argnames, valset, id, keywords, scopenum, def setmulti2(self, valtypes, argnames, valset, id, marks, scopenum,
param_index): param_index):
for arg,val in zip(argnames, valset): for arg, val in zip(argnames, valset):
self._checkargnotcontained(arg) self._checkargnotcontained(arg)
valtype_for_arg = valtypes[arg] valtype_for_arg = valtypes[arg]
getattr(self, valtype_for_arg)[arg] = val getattr(self, valtype_for_arg)[arg] = val
self.indices[arg] = param_index self.indices[arg] = param_index
self._arg2scopenum[arg] = scopenum self._arg2scopenum[arg] = scopenum
self._idlist.append(id) self._idlist.append(id)
self.keywords.update(keywords) self.marks.extend(marks)
def setall(self, funcargs, id, param): def setall(self, funcargs, id, param):
for x in funcargs: for x in funcargs:
@ -682,20 +734,23 @@ class CallSpec2(object):
class Metafunc(fixtures.FuncargnamesCompatAttr): class Metafunc(fixtures.FuncargnamesCompatAttr):
""" """
Metafunc objects are passed to the ``pytest_generate_tests`` hook. Metafunc objects are passed to the :func:`pytest_generate_tests <_pytest.hookspec.pytest_generate_tests>` hook.
They help to inspect a test function and to generate tests according to They help to inspect a test function and to generate tests according to
test configuration or values specified in the class or module where a test configuration or values specified in the class or module where a
test function is defined. test function is defined.
""" """
def __init__(self, function, fixtureinfo, config, cls=None, module=None):
def __init__(self, definition, fixtureinfo, config, cls=None, module=None):
#: access to the :class:`_pytest.config.Config` object for the test session #: access to the :class:`_pytest.config.Config` object for the test session
assert isinstance(definition, FunctionDefinition) or type(definition).__name__ == "DefinitionMock"
self.definition = definition
self.config = config self.config = config
#: the module object where the test function is defined in. #: the module object where the test function is defined in.
self.module = module self.module = module
#: underlying python test function #: underlying python test function
self.function = function self.function = definition.obj
#: set of fixture names required by the test function #: set of fixture names required by the test function
self.fixturenames = fixtureinfo.names_closure self.fixturenames = fixtureinfo.names_closure
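A minimal sketch (not part of this diff) of the hook that receives a ``Metafunc``; the ``backend`` argument name and its values are assumptions:

    # conftest.py
    def pytest_generate_tests(metafunc):
        # parametrize any test that declares a 'backend' argument
        if "backend" in metafunc.fixturenames:
            metafunc.parametrize("backend", ["sqlite", "postgres"])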
@ -704,7 +759,7 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
self.cls = cls self.cls = cls
self._calls = [] self._calls = []
self._ids = py.builtin.set() self._ids = set()
self._arg2fixturedefs = fixtureinfo.name2fixturedefs self._arg2fixturedefs = fixtureinfo.name2fixturedefs
def parametrize(self, argnames, argvalues, indirect=False, ids=None, def parametrize(self, argnames, argvalues, indirect=False, ids=None,
@ -747,30 +802,13 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
to set a dynamic scope using test context or configuration. to set a dynamic scope using test context or configuration.
""" """
from _pytest.fixtures import scope2index from _pytest.fixtures import scope2index
from _pytest.mark import MARK_GEN, ParameterSet from _pytest.mark import ParameterSet
from py.io import saferepr from py.io import saferepr
if not isinstance(argnames, (tuple, list)): argnames, parameters = ParameterSet._for_parametrize(
argnames = [x.strip() for x in argnames.split(",") if x.strip()] argnames, argvalues, self.function, self.config)
force_tuple = len(argnames) == 1
else:
force_tuple = False
parameters = [
ParameterSet.extract_from(x, legacy_force_tuple=force_tuple)
for x in argvalues]
del argvalues del argvalues
if not parameters:
fs, lineno = getfslineno(self.function)
reason = "got empty parameter set %r, function %s at %s:%d" % (
argnames, self.function.__name__, fs, lineno)
mark = MARK_GEN.skip(reason=reason)
parameters.append(ParameterSet(
values=(NOTSET,) * len(argnames),
marks=[mark],
id=None,
))
if scope is None: if scope is None:
scope = _find_parametrized_scope(argnames, self._arg2fixturedefs, indirect) scope = _find_parametrized_scope(argnames, self._arg2fixturedefs, indirect)
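The empty-parameter-set handling that moved into ``ParameterSet._for_parametrize`` keeps the behaviour sketched below (an assumption based on the removed code: an empty ``argvalues`` yields one skipped test rather than an error):

    import pytest

    @pytest.mark.parametrize("value", [])
    def test_nothing(value):
        # never executed; reported as skipped with an explanatory reason
        assert value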
@ -806,7 +844,7 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
raise ValueError('%d tests specified with %d ids' % ( raise ValueError('%d tests specified with %d ids' % (
len(parameters), len(ids))) len(parameters), len(ids)))
for id_value in ids: for id_value in ids:
if id_value is not None and not isinstance(id_value, py.builtin._basestring): if id_value is not None and not isinstance(id_value, six.string_types):
msg = 'ids must be list of strings, found: %s (type: %s)' msg = 'ids must be list of strings, found: %s (type: %s)'
raise ValueError(msg % (saferepr(id_value), type(id_value).__name__)) raise ValueError(msg % (saferepr(id_value), type(id_value).__name__))
ids = idmaker(argnames, parameters, idfn, ids, self.config) ids = idmaker(argnames, parameters, idfn, ids, self.config)
@ -820,15 +858,19 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
'equal to the number of names ({1})'.format( 'equal to the number of names ({1})'.format(
param.values, argnames)) param.values, argnames))
newcallspec = callspec.copy(self) newcallspec = callspec.copy(self)
newcallspec.setmulti(valtypes, argnames, param.values, a_id, newcallspec.setmulti2(valtypes, argnames, param.values, a_id,
param.deprecated_arg_dict, scopenum, param_index) param.marks, scopenum, param_index)
newcalls.append(newcallspec) newcalls.append(newcallspec)
self._calls = newcalls self._calls = newcalls
def addcall(self, funcargs=None, id=NOTSET, param=NOTSET): def addcall(self, funcargs=None, id=NOTSET, param=NOTSET):
""" (deprecated, use parametrize) Add a new call to the underlying """ Add a new call to the underlying test function during the collection phase of a test run.
test function during the collection phase of a test run. Note that
request.addcall() is called during the test collection phase prior and .. deprecated:: 3.3
Use :meth:`parametrize` instead.
Note that request.addcall() is called during the test collection phase prior and
independently to actual test execution. You should only use addcall() independently to actual test execution. You should only use addcall()
if you need to specify multiple arguments of a test function. if you need to specify multiple arguments of a test function.
@ -841,6 +883,8 @@ class Metafunc(fixtures.FuncargnamesCompatAttr):
:arg param: a parameter which will be exposed to a later fixture function :arg param: a parameter which will be exposed to a later fixture function
invocation through the ``request.param`` attribute. invocation through the ``request.param`` attribute.
""" """
if self.config:
self.config.warn('C1', message=deprecated.METAFUNC_ADD_CALL, fslocation=None)
assert funcargs is None or isinstance(funcargs, dict) assert funcargs is None or isinstance(funcargs, dict)
if funcargs is not None: if funcargs is not None:
for name in funcargs: for name in funcargs:
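A hedged sketch of the migration the ``METAFUNC_ADD_CALL`` deprecation asks for; the argument name ``x`` is illustrative:

    # deprecated:
    # def pytest_generate_tests(metafunc):
    #     metafunc.addcall(funcargs={"x": 1})
    #     metafunc.addcall(funcargs={"x": 2})

    # preferred replacement:
    def pytest_generate_tests(metafunc):
        if "x" in metafunc.fixturenames:
            metafunc.parametrize("x", [1, 2])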
@ -900,7 +944,7 @@ def _idval(val, argname, idx, idfn, config=None):
msg += '\nUpdate your code as this will raise an error in pytest-4.0.' msg += '\nUpdate your code as this will raise an error in pytest-4.0.'
warnings.warn(msg, DeprecationWarning) warnings.warn(msg, DeprecationWarning)
if s: if s:
return _escape_strings(s) return ascii_escaped(s)
if config: if config:
hook_id = config.hook.pytest_make_parametrize_id( hook_id = config.hook.pytest_make_parametrize_id(
@ -909,16 +953,16 @@ def _idval(val, argname, idx, idfn, config=None):
return hook_id return hook_id
if isinstance(val, STRING_TYPES): if isinstance(val, STRING_TYPES):
return _escape_strings(val) return ascii_escaped(val)
elif isinstance(val, (float, int, bool, NoneType)): elif isinstance(val, (float, int, bool, NoneType)):
return str(val) return str(val)
elif isinstance(val, REGEX_TYPE): elif isinstance(val, REGEX_TYPE):
return _escape_strings(val.pattern) return ascii_escaped(val.pattern)
elif enum is not None and isinstance(val, enum.Enum): elif enum is not None and isinstance(val, enum.Enum):
return str(val) return str(val)
elif isclass(val) and hasattr(val, '__name__'): elif (isclass(val) or isfunction(val)) and hasattr(val, '__name__'):
return val.__name__ return val.__name__
return str(argname)+str(idx) return str(argname) + str(idx)
def _idvalset(idx, parameterset, argnames, idfn, ids, config=None): def _idvalset(idx, parameterset, argnames, idfn, ids, config=None):
@ -929,7 +973,7 @@ def _idvalset(idx, parameterset, argnames, idfn, ids, config=None):
for val, argname in zip(parameterset.values, argnames)] for val, argname in zip(parameterset.values, argnames)]
return "-".join(this_id) return "-".join(this_id)
else: else:
return _escape_strings(ids[idx]) return ascii_escaped(ids[idx])
def idmaker(argnames, parametersets, idfn=None, ids=None, config=None): def idmaker(argnames, parametersets, idfn=None, ids=None, config=None):
@ -958,52 +1002,48 @@ def _show_fixtures_per_test(config, session):
tw = _pytest.config.create_terminal_writer(config) tw = _pytest.config.create_terminal_writer(config)
verbose = config.getvalue("verbose") verbose = config.getvalue("verbose")
def get_best_rel(func): def get_best_relpath(func):
loc = getlocation(func, curdir) loc = getlocation(func, curdir)
return curdir.bestrelpath(loc) return curdir.bestrelpath(loc)
def write_fixture(fixture_def): def write_fixture(fixture_def):
argname = fixture_def.argname argname = fixture_def.argname
if verbose <= 0 and argname.startswith("_"): if verbose <= 0 and argname.startswith("_"):
return return
if verbose > 0: if verbose > 0:
bestrel = get_best_rel(fixture_def.func) bestrel = get_best_relpath(fixture_def.func)
funcargspec = "{0} -- {1}".format(argname, bestrel) funcargspec = "{0} -- {1}".format(argname, bestrel)
else: else:
funcargspec = argname funcargspec = argname
tw.line(funcargspec, green=True) tw.line(funcargspec, green=True)
INDENT = ' {0}'
fixture_doc = fixture_def.func.__doc__ fixture_doc = fixture_def.func.__doc__
if fixture_doc: if fixture_doc:
for line in fixture_doc.strip().split('\n'): write_docstring(tw, fixture_doc)
tw.line(INDENT.format(line.strip()))
else: else:
tw.line(INDENT.format('no docstring available'), red=True) tw.line(' no docstring available', red=True)
def write_item(item): def write_item(item):
name2fixturedefs = item._fixtureinfo.name2fixturedefs try:
info = item._fixtureinfo
if not name2fixturedefs: except AttributeError:
# The given test item does not use any fixtures # doctests items have no _fixtureinfo attribute
return
if not info.name2fixturedefs:
# this test item does not use any fixtures
return return
bestrel = get_best_rel(item.function)
tw.line() tw.line()
tw.sep('-', 'fixtures used by {0}'.format(item.name)) tw.sep('-', 'fixtures used by {0}'.format(item.name))
tw.sep('-', '({0})'.format(bestrel)) tw.sep('-', '({0})'.format(get_best_relpath(item.function)))
for argname, fixture_defs in sorted(name2fixturedefs.items()): # dict key not used in loop but needed for sorting
assert fixture_defs is not None for _, fixturedefs in sorted(info.name2fixturedefs.items()):
if not fixture_defs: assert fixturedefs is not None
if not fixturedefs:
continue continue
# The last fixture def item in the list is expected # last item is expected to be the one used by the test item
# to be the one used by the test item write_fixture(fixturedefs[-1])
write_fixture(fixture_defs[-1])
for item in session.items: for session_item in session.items:
write_item(item) write_item(session_item)
def showfixtures(config): def showfixtures(config):
@ -1043,35 +1083,48 @@ def _showfixtures_main(config, session):
if currentmodule != module: if currentmodule != module:
if not module.startswith("_pytest."): if not module.startswith("_pytest."):
tw.line() tw.line()
tw.sep("-", "fixtures defined from %s" %(module,)) tw.sep("-", "fixtures defined from %s" % (module,))
currentmodule = module currentmodule = module
if verbose <= 0 and argname[0] == "_": if verbose <= 0 and argname[0] == "_":
continue continue
if verbose > 0: if verbose > 0:
funcargspec = "%s -- %s" %(argname, bestrel,) funcargspec = "%s -- %s" % (argname, bestrel,)
else: else:
funcargspec = argname funcargspec = argname
tw.line(funcargspec, green=True) tw.line(funcargspec, green=True)
loc = getlocation(fixturedef.func, curdir) loc = getlocation(fixturedef.func, curdir)
doc = fixturedef.func.__doc__ or "" doc = fixturedef.func.__doc__ or ""
if doc: if doc:
for line in doc.strip().split("\n"): write_docstring(tw, doc)
tw.line(" " + line.strip())
else: else:
tw.line(" %s: no docstring available" %(loc,), tw.line(" %s: no docstring available" % (loc,),
red=True) red=True)
def write_docstring(tw, doc):
INDENT = " "
doc = doc.rstrip()
if "\n" in doc:
firstline, rest = doc.split("\n", 1)
else:
firstline, rest = doc, ""
# if firstline.strip():
# the basic pytest Function item tw.line(INDENT + firstline.strip())
#
class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr): if rest:
for line in dedent(rest).split("\n"):
tw.write(INDENT + line + "\n")
class Function(FunctionMixin, nodes.Item, fixtures.FuncargnamesCompatAttr):
""" a Function Item is responsible for setting up and executing a """ a Function Item is responsible for setting up and executing a
Python test function. Python test function.
""" """
_genid = None _genid = None
# disable since functions handle it themselfes
_ALLOW_MARKERS = False
def __init__(self, name, parent, args=None, config=None, def __init__(self, name, parent, args=None, config=None,
callspec=None, callobj=NOTSET, keywords=None, session=None, callspec=None, callobj=NOTSET, keywords=None, session=None,
fixtureinfo=None, originalname=None): fixtureinfo=None, originalname=None):
@ -1082,9 +1135,17 @@ class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr):
self.obj = callobj self.obj = callobj
self.keywords.update(self.obj.__dict__) self.keywords.update(self.obj.__dict__)
self.own_markers.extend(get_unpacked_marks(self.obj))
if callspec: if callspec:
self.callspec = callspec self.callspec = callspec
self.keywords.update(callspec.keywords) # this is total hostile and a mess
# keywords are broken by design by now
# this will be redeemed later
for mark in callspec.marks:
# feel free to cry, this was broken for years before
# and keywords cant fix it per design
self.keywords[mark.name] = mark
self.own_markers.extend(callspec.marks)
if keywords: if keywords:
self.keywords.update(keywords) self.keywords.update(keywords)
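For context, a minimal sketch (not part of this diff) of how per-parameter marks reach the generated items via ``callspec.marks``; the xfail reason is made up:

    import pytest

    @pytest.mark.parametrize("n", [
        1,
        pytest.param(0, marks=pytest.mark.xfail(reason="zero not supported")),
    ])
    def test_reciprocal(n):
        assert 1 / n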
@ -1143,3 +1204,15 @@ class Function(FunctionMixin, main.Item, fixtures.FuncargnamesCompatAttr):
def setup(self): def setup(self):
super(Function, self).setup() super(Function, self).setup()
fixtures.fillfixtures(self) fixtures.fillfixtures(self)
class FunctionDefinition(Function):
"""
internal hack until we get actual definition nodes instead of the
crappy metafunc hack
"""
def runtest(self):
raise RuntimeError("function definitions are not supposed to be used")
setup = runtest
@ -2,14 +2,278 @@ import math
import sys import sys
import py import py
from six import binary_type, text_type
from six.moves import zip, filterfalse
from more_itertools.more import always_iterable
from _pytest.compat import isclass from _pytest.compat import isclass
from _pytest.runner import fail from _pytest.outcomes import fail
import _pytest._code import _pytest._code
def _cmp_raises_type_error(self, other):
"""__cmp__ implementation which raises TypeError. Used
by Approx base classes to implement only == and != and raise a
TypeError for other comparisons.
Needed in Python 2 only, Python 3 all it takes is not implementing the
other operators at all.
"""
__tracebackhide__ = True
raise TypeError('Comparison operators other than == and != not supported by approx objects')
# builtin pytest.approx helper # builtin pytest.approx helper
class approx(object): class ApproxBase(object):
"""
Provide shared utilities for making approximate comparisons between numbers
or sequences of numbers.
"""
# Tell numpy to use our `__eq__` operator instead of its
__array_ufunc__ = None
__array_priority__ = 100
def __init__(self, expected, rel=None, abs=None, nan_ok=False):
self.expected = expected
self.abs = abs
self.rel = rel
self.nan_ok = nan_ok
def __repr__(self):
raise NotImplementedError
def __eq__(self, actual):
return all(
a == self._approx_scalar(x)
for a, x in self._yield_comparisons(actual))
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
if sys.version_info[0] == 2:
__cmp__ = _cmp_raises_type_error
def _approx_scalar(self, x):
return ApproxScalar(x, rel=self.rel, abs=self.abs, nan_ok=self.nan_ok)
def _yield_comparisons(self, actual):
"""
Yield all the pairs of numbers to be compared. This is used to
implement the `__eq__` method.
"""
raise NotImplementedError
class ApproxNumpy(ApproxBase):
"""
Perform approximate comparisons for numpy arrays.
"""
def __repr__(self):
# It might be nice to rewrite this function to account for the
# shape of the array...
import numpy as np
return "approx({0!r})".format(list(
self._approx_scalar(x) for x in np.asarray(self.expected)))
if sys.version_info[0] == 2:
__cmp__ = _cmp_raises_type_error
def __eq__(self, actual):
import numpy as np
# self.expected is supposed to always be an array here
if not np.isscalar(actual):
try:
actual = np.asarray(actual)
except: # noqa
raise TypeError("cannot compare '{0}' to numpy.ndarray".format(actual))
if not np.isscalar(actual) and actual.shape != self.expected.shape:
return False
return ApproxBase.__eq__(self, actual)
def _yield_comparisons(self, actual):
import numpy as np
# `actual` can either be a numpy array or a scalar, it is treated in
# `__eq__` before being passed to `ApproxBase.__eq__`, which is the
# only method that calls this one.
if np.isscalar(actual):
for i in np.ndindex(self.expected.shape):
yield actual, np.asscalar(self.expected[i])
else:
for i in np.ndindex(self.expected.shape):
yield np.asscalar(actual[i]), np.asscalar(self.expected[i])
class ApproxMapping(ApproxBase):
"""
Perform approximate comparisons for mappings where the values are numbers
(the keys can be anything).
"""
def __repr__(self):
return "approx({0!r})".format(dict(
(k, self._approx_scalar(v))
for k, v in self.expected.items()))
def __eq__(self, actual):
if set(actual.keys()) != set(self.expected.keys()):
return False
return ApproxBase.__eq__(self, actual)
def _yield_comparisons(self, actual):
for k in self.expected.keys():
yield actual[k], self.expected[k]
class ApproxSequence(ApproxBase):
"""
Perform approximate comparisons for sequences of numbers.
"""
def __repr__(self):
seq_type = type(self.expected)
if seq_type not in (tuple, list, set):
seq_type = list
return "approx({0!r})".format(seq_type(
self._approx_scalar(x) for x in self.expected))
def __eq__(self, actual):
if len(actual) != len(self.expected):
return False
return ApproxBase.__eq__(self, actual)
def _yield_comparisons(self, actual):
return zip(actual, self.expected)
class ApproxScalar(ApproxBase):
"""
Perform approximate comparisons for single numbers only.
"""
DEFAULT_ABSOLUTE_TOLERANCE = 1e-12
DEFAULT_RELATIVE_TOLERANCE = 1e-6
def __repr__(self):
"""
Return a string communicating both the expected value and the tolerance
for the comparison being made, e.g. '1.0 +- 1e-6'. Use the unicode
plus/minus symbol if this is python3 (it's too hard to get right for
python2).
"""
if isinstance(self.expected, complex):
return str(self.expected)
# Infinities aren't compared using tolerances, so don't show a
# tolerance.
if math.isinf(self.expected):
return str(self.expected)
# If a sensible tolerance can't be calculated, self.tolerance will
# raise a ValueError. In this case, display '???'.
try:
vetted_tolerance = '{:.1e}'.format(self.tolerance)
except ValueError:
vetted_tolerance = '???'
if sys.version_info[0] == 2:
return '{0} +- {1}'.format(self.expected, vetted_tolerance)
else:
return u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)
def __eq__(self, actual):
"""
Return true if the given value is equal to the expected value within
the pre-specified tolerance.
"""
if _is_numpy_array(actual):
return ApproxNumpy(actual, self.abs, self.rel, self.nan_ok) == self.expected
# Short-circuit exact equality.
if actual == self.expected:
return True
# Allow the user to control whether NaNs are considered equal to each
# other or not. The abs() calls are for compatibility with complex
# numbers.
if math.isnan(abs(self.expected)):
return self.nan_ok and math.isnan(abs(actual))
# Infinity shouldn't be approximately equal to anything but itself, but
# if there's a relative tolerance, it will be infinite and infinity
# will seem approximately equal to everything. The equal-to-itself
# case would have been short circuited above, so here we can just
# return false if the expected value is infinite. The abs() call is
# for compatibility with complex numbers.
if math.isinf(abs(self.expected)):
return False
# Return true if the two numbers are within the tolerance.
return abs(self.expected - actual) <= self.tolerance
__hash__ = None
@property
def tolerance(self):
"""
Return the tolerance for the comparison. This could be either an
absolute tolerance or a relative tolerance, depending on what the user
specified or which would be larger.
"""
def set_default(x, default):
return x if x is not None else default
# Figure out what the absolute tolerance should be. ``self.abs`` is
# either None or a value specified by the user.
absolute_tolerance = set_default(self.abs, self.DEFAULT_ABSOLUTE_TOLERANCE)
if absolute_tolerance < 0:
raise ValueError("absolute tolerance can't be negative: {}".format(absolute_tolerance))
if math.isnan(absolute_tolerance):
raise ValueError("absolute tolerance can't be NaN.")
# If the user specified an absolute tolerance but not a relative one,
# just return the absolute tolerance.
if self.rel is None:
if self.abs is not None:
return absolute_tolerance
# Figure out what the relative tolerance should be. ``self.rel`` is
# either None or a value specified by the user. This is done after
# we've made sure the user didn't ask for an absolute tolerance only,
# because we don't want to raise errors about the relative tolerance if
# we aren't even going to use it.
relative_tolerance = set_default(self.rel, self.DEFAULT_RELATIVE_TOLERANCE) * abs(self.expected)
if relative_tolerance < 0:
raise ValueError("relative tolerance can't be negative: {}".format(relative_tolerance))
if math.isnan(relative_tolerance):
raise ValueError("relative tolerance can't be NaN.")
# Return the larger of the relative and absolute tolerances.
return max(relative_tolerance, absolute_tolerance)
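A small worked example of the rule above (a sketch, not part of this diff): with ``rel=1e-3`` and ``abs=1e-6`` the effective tolerance for an expected value of ``100`` is ``max(1e-3 * 100, 1e-6) == 0.1``:

    from pytest import approx

    assert 100.05 == approx(100, rel=1e-3, abs=1e-6)       # |diff| = 0.05 <= 0.1
    assert not (100.2 == approx(100, rel=1e-3, abs=1e-6))  # |diff| = 0.2  >  0.1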
class ApproxDecimal(ApproxScalar):
from decimal import Decimal
DEFAULT_ABSOLUTE_TOLERANCE = Decimal('1e-12')
DEFAULT_RELATIVE_TOLERANCE = Decimal('1e-6')
def approx(expected, rel=None, abs=None, nan_ok=False):
""" """
Assert that two numbers (or two sets of numbers) are equal to each other Assert that two numbers (or two sets of numbers) are equal to each other
within some tolerance. within some tolerance.
@ -45,21 +309,42 @@ class approx(object):
>>> 0.1 + 0.2 == approx(0.3) >>> 0.1 + 0.2 == approx(0.3)
True True
The same syntax also works on sequences of numbers:: The same syntax also works for sequences of numbers::
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6)) >>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True True
Dictionary *values*::
>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
``numpy`` arrays::
>>> import numpy as np # doctest: +SKIP
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6])) # doctest: +SKIP
True
And for a ``numpy`` array against a scalar::
>>> import numpy as np # doctest: +SKIP
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.1]) == approx(0.3) # doctest: +SKIP
True
By default, ``approx`` considers numbers within a relative tolerance of By default, ``approx`` considers numbers within a relative tolerance of
``1e-6`` (i.e. one part in a million) of its expected value to be equal. ``1e-6`` (i.e. one part in a million) of its expected value to be equal.
This treatment would lead to surprising results if the expected value was This treatment would lead to surprising results if the expected value was
``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``. ``0.0``, because nothing but ``0.0`` itself is relatively close to ``0.0``.
To handle this case less surprisingly, ``approx`` also considers numbers To handle this case less surprisingly, ``approx`` also considers numbers
within an absolute tolerance of ``1e-12`` of its expected value to be within an absolute tolerance of ``1e-12`` of its expected value to be
equal. Infinite numbers are another special case. They are only equal. Infinity and NaN are special cases. Infinity is only considered
considered equal to themselves, regardless of the relative tolerance. Both equal to itself, regardless of the relative tolerance. NaN is not
the relative and absolute tolerances can be changed by passing arguments to considered equal to anything by default, but you can make it be equal to
the ``approx`` constructor:: itself by setting the ``nan_ok`` argument to True. (This is meant to
facilitate comparing arrays that use NaN to mean "no data".)
Both the relative and absolute tolerances can be changed by passing
arguments to the ``approx`` constructor::
>>> 1.0001 == approx(1) >>> 1.0001 == approx(1)
False False
@ -121,140 +406,75 @@ class approx(object):
is asymmetric and you can think of ``b`` as the reference value. In the is asymmetric and you can think of ``b`` as the reference value. In the
special case that you explicitly specify an absolute tolerance but not a special case that you explicitly specify an absolute tolerance but not a
relative tolerance, only the absolute tolerance is considered. relative tolerance, only the absolute tolerance is considered.
.. warning::
.. versionchanged:: 3.2
In order to avoid inconsistent behavior, ``TypeError`` is
raised for ``>``, ``>=``, ``<`` and ``<=`` comparisons.
The example below illustrates the problem::
assert approx(0.1) > 0.1 + 1e-10 # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1) # calls approx(0.1).__lt__(0.1 + 1e-10)
In the second example one expects ``approx(0.1).__le__(0.1 + 1e-10)``
to be called. But instead, ``approx(0.1).__lt__(0.1 + 1e-10)`` is used for
the comparison. This is because the call hierarchy of rich comparisons
follows a fixed behavior. `More information...`__
__ https://docs.python.org/3/reference/datamodel.html#object.__ge__
""" """
def __init__(self, expected, rel=None, abs=None): from collections import Mapping, Sequence
self.expected = expected from _pytest.compat import STRING_TYPES as String
self.abs = abs from decimal import Decimal
self.rel = rel
def __repr__(self): # Delegate the comparison to a class that knows how to deal with the type
return ', '.join(repr(x) for x in self.expected) # of the expected value (e.g. int, float, list, dict, numpy.array, etc).
#
# This architecture is really driven by the need to support numpy arrays.
# The only way to override `==` for arrays without requiring that approx be
# the left operand is to inherit the approx object from `numpy.ndarray`.
# But that can't be a general solution, because it requires (1) numpy to be
# installed and (2) the expected value to be a numpy array. So the general
# solution is to delegate each type of expected value to a different class.
#
# This has the advantage that it made it easy to support mapping types
# (i.e. dict). The old code accepted mapping types, but would only compare
# their keys, which is probably not what most people would expect.
def __eq__(self, actual): if _is_numpy_array(expected):
from collections import Iterable cls = ApproxNumpy
if not isinstance(actual, Iterable): elif isinstance(expected, Mapping):
actual = [actual] cls = ApproxMapping
if len(actual) != len(self.expected): elif isinstance(expected, Sequence) and not isinstance(expected, String):
return False cls = ApproxSequence
return all(a == x for a, x in zip(actual, self.expected)) elif isinstance(expected, Decimal):
cls = ApproxDecimal
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
@property
def expected(self):
# Regardless of whether the user-specified expected value is a number
# or a sequence of numbers, return a list of ApproxNotIterable objects
# that can be compared against.
from collections import Iterable
approx_non_iter = lambda x: ApproxNonIterable(x, self.rel, self.abs)
if isinstance(self._expected, Iterable):
return [approx_non_iter(x) for x in self._expected]
else: else:
return [approx_non_iter(self._expected)] cls = ApproxScalar
@expected.setter return cls(expected, rel, abs, nan_ok)
def expected(self, expected):
self._expected = expected
class ApproxNonIterable(object): def _is_numpy_array(obj):
""" """
Perform approximate comparisons for single numbers only. Return true if the given object is a numpy array. Make a special effort to
avoid importing numpy unless it's really necessary.
In other words, the ``expected`` attribute for objects of this class must
be some sort of number. This is in contrast to the ``approx`` class, where
the ``expected`` attribute can either be a number of a sequence of numbers.
This class is responsible for making comparisons, while ``approx`` is
responsible for abstracting the difference between numbers and sequences of
numbers. Although this class can stand on its own, it's only meant to be
used within ``approx``.
""" """
import inspect
def __init__(self, expected, rel=None, abs=None): for cls in inspect.getmro(type(obj)):
self.expected = expected if cls.__module__ == 'numpy':
self.abs = abs
self.rel = rel
def __repr__(self):
if isinstance(self.expected, complex):
return str(self.expected)
# Infinities aren't compared using tolerances, so don't show a
# tolerance.
if math.isinf(self.expected):
return str(self.expected)
# If a sensible tolerance can't be calculated, self.tolerance will
# raise a ValueError. In this case, display '???'.
try: try:
vetted_tolerance = '{:.1e}'.format(self.tolerance) import numpy as np
except ValueError: return isinstance(obj, np.ndarray)
vetted_tolerance = '???' except ImportError:
pass
if sys.version_info[0] == 2:
return '{0} +- {1}'.format(self.expected, vetted_tolerance)
else:
return u'{0} \u00b1 {1}'.format(self.expected, vetted_tolerance)
def __eq__(self, actual):
# Short-circuit exact equality.
if actual == self.expected:
return True
# Infinity shouldn't be approximately equal to anything but itself, but
# if there's a relative tolerance, it will be infinite and infinity
# will seem approximately equal to everything. The equal-to-itself
# case would have been short circuited above, so here we can just
# return false if the expected value is infinite. The abs() call is
# for compatibility with complex numbers.
if math.isinf(abs(self.expected)):
return False return False
# Return true if the two numbers are within the tolerance.
return abs(self.expected - actual) <= self.tolerance
__hash__ = None
def __ne__(self, actual):
return not (actual == self)
@property
def tolerance(self):
set_default = lambda x, default: x if x is not None else default
# Figure out what the absolute tolerance should be. ``self.abs`` is
# either None or a value specified by the user.
absolute_tolerance = set_default(self.abs, 1e-12)
if absolute_tolerance < 0:
raise ValueError("absolute tolerance can't be negative: {}".format(absolute_tolerance))
if math.isnan(absolute_tolerance):
raise ValueError("absolute tolerance can't be NaN.")
# If the user specified an absolute tolerance but not a relative one,
# just return the absolute tolerance.
if self.rel is None:
if self.abs is not None:
return absolute_tolerance
# Figure out what the relative tolerance should be. ``self.rel`` is
# either None or a value specified by the user. This is done after
# we've made sure the user didn't ask for an absolute tolerance only,
# because we don't want to raise errors about the relative tolerance if
# we aren't even going to use it.
relative_tolerance = set_default(self.rel, 1e-6) * abs(self.expected)
if relative_tolerance < 0:
raise ValueError("relative tolerance can't be negative: {}".format(absolute_tolerance))
if math.isnan(relative_tolerance):
raise ValueError("relative tolerance can't be NaN.")
# Return the larger of the relative and absolute tolerances.
return max(relative_tolerance, absolute_tolerance)
# builtin pytest.raises helper # builtin pytest.raises helper
@ -263,10 +483,13 @@ def raises(expected_exception, *args, **kwargs):
Assert that a code block/function call raises ``expected_exception`` Assert that a code block/function call raises ``expected_exception``
and raise a failure exception otherwise. and raise a failure exception otherwise.
:arg message: if specified, provides a custom failure message if the
exception is not raised
:arg match: if specified, asserts that the exception matches a text or regex
This helper produces a ``ExceptionInfo()`` object (see below). This helper produces a ``ExceptionInfo()`` object (see below).
If using Python 2.5 or above, you may use this function as a You may use this function as a context manager::
context manager::
>>> with raises(ZeroDivisionError): >>> with raises(ZeroDivisionError):
... 1/0 ... 1/0
@ -282,7 +505,6 @@ def raises(expected_exception, *args, **kwargs):
... ...
Failed: Expecting ZeroDivisionError Failed: Expecting ZeroDivisionError
.. note:: .. note::
When using ``pytest.raises`` as a context manager, it's worthwhile to When using ``pytest.raises`` as a context manager, it's worthwhile to
@ -306,7 +528,8 @@ def raises(expected_exception, *args, **kwargs):
... ...
>>> assert exc_info.type == ValueError >>> assert exc_info.type == ValueError
Or you can use the keyword argument ``match`` to assert that the
Since version ``3.1`` you can use the keyword argument ``match`` to assert that the
exception matches a text or regex:: exception matches a text or regex::
>>> with raises(ValueError, match='must be 0 or None'): >>> with raises(ValueError, match='must be 0 or None'):
@ -315,8 +538,12 @@ def raises(expected_exception, *args, **kwargs):
>>> with raises(ValueError, match=r'must be \d+$'): >>> with raises(ValueError, match=r'must be \d+$'):
... raise ValueError("value must be 42") ... raise ValueError("value must be 42")
**Legacy forms**
Or you can specify a callable by passing a to-be-called lambda:: The forms below are fully supported but are discouraged for new code because the
context manager form is regarded as more readable and less error-prone.
It is possible to specify a callable by passing a to-be-called lambda::
>>> raises(ZeroDivisionError, lambda: 1/0) >>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...> <ExceptionInfo ...>
@ -330,13 +557,17 @@ def raises(expected_exception, *args, **kwargs):
>>> raises(ZeroDivisionError, f, x=0) >>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...> <ExceptionInfo ...>
A third possibility is to use a string to be executed:: It is also possible to pass a string to be evaluated at runtime::
>>> raises(ZeroDivisionError, "f(0)") >>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...> <ExceptionInfo ...>
.. autoclass:: _pytest._code.ExceptionInfo The string will be evaluated using the same ``locals()`` and ``globals()``
:members: at the moment of the ``raises`` call.
.. currentmodule:: _pytest._code
Consult the API of ``excinfo`` objects: :class:`ExceptionInfo`.
.. note:: .. note::
Similar to caught exception objects in Python, explicitly clearing Similar to caught exception objects in Python, explicitly clearing
@ -354,14 +585,11 @@ def raises(expected_exception, *args, **kwargs):
""" """
__tracebackhide__ = True __tracebackhide__ = True
base_type = (type, text_type, binary_type)
for exc in filterfalse(isclass, always_iterable(expected_exception, base_type)):
msg = ("exceptions must be old-style classes or" msg = ("exceptions must be old-style classes or"
" derived from BaseException, not %s") " derived from BaseException, not %s")
if isinstance(expected_exception, tuple):
for exc in expected_exception:
if not isclass(exc):
raise TypeError(msg % type(exc)) raise TypeError(msg % type(exc))
elif not isclass(expected_exception):
raise TypeError(msg % type(expected_exception))
message = "DID NOT RAISE {0}".format(expected_exception) message = "DID NOT RAISE {0}".format(expected_exception)
match_expr = None match_expr = None
@ -371,7 +599,10 @@ def raises(expected_exception, *args, **kwargs):
message = kwargs.pop("message") message = kwargs.pop("message")
if "match" in kwargs: if "match" in kwargs:
match_expr = kwargs.pop("match") match_expr = kwargs.pop("match")
message += " matching '{0}'".format(match_expr) if kwargs:
msg = 'Unexpected keyword arguments passed to pytest.raises: '
msg += ', '.join(kwargs.keys())
raise TypeError(msg)
return RaisesContext(expected_exception, message, match_expr) return RaisesContext(expected_exception, message, match_expr)
elif isinstance(args[0], str): elif isinstance(args[0], str):
code, = args code, = args
@ -379,7 +610,7 @@ def raises(expected_exception, *args, **kwargs):
frame = sys._getframe(1) frame = sys._getframe(1)
loc = frame.f_locals.copy() loc = frame.f_locals.copy()
loc.update(kwargs) loc.update(kwargs)
#print "raises frame scope: %r" % frame.f_locals # print "raises frame scope: %r" % frame.f_locals
try: try:
code = _pytest._code.Source(code).compile() code = _pytest._code.Source(code).compile()
py.builtin.exec_(code, frame.f_globals, loc) py.builtin.exec_(code, frame.f_globals, loc)
@ -414,17 +645,10 @@ class RaisesContext(object):
__tracebackhide__ = True __tracebackhide__ = True
if tp[0] is None: if tp[0] is None:
fail(self.message) fail(self.message)
if sys.version_info < (2, 7):
# py26: on __exit__() exc_value often does not contain the
# exception value.
# http://bugs.python.org/issue7853
if not isinstance(tp[1], BaseException):
exc_type, value, traceback = tp
tp = exc_type, exc_type(value), traceback
self.excinfo.__init__(tp) self.excinfo.__init__(tp)
suppress_exception = issubclass(self.excinfo.type, self.expected_exception) suppress_exception = issubclass(self.excinfo.type, self.expected_exception)
if sys.version_info[0] == 2 and suppress_exception: if sys.version_info[0] == 2 and suppress_exception:
sys.exc_clear() sys.exc_clear()
if self.match_expr: if self.match_expr and suppress_exception:
self.excinfo.match(self.match_expr) self.excinfo.match(self.match_expr)
return suppress_exception return suppress_exception
@ -7,15 +7,16 @@ import _pytest._code
import py import py
import sys import sys
import warnings import warnings
import re
from _pytest.fixtures import yield_fixture from _pytest.fixtures import yield_fixture
from _pytest.outcomes import fail
@yield_fixture @yield_fixture
def recwarn(): def recwarn():
"""Return a WarningsRecorder instance that provides these methods: """Return a :class:`WarningsRecorder` instance that records all warnings emitted by test functions.
* ``pop(category=None)``: return last warning matching the category.
* ``clear()``: clear list of warnings
See http://docs.python.org/library/warnings.html for information See http://docs.python.org/library/warnings.html for information
on warning categories. on warning categories.
@ -84,11 +85,11 @@ class _DeprecatedCallContext(object):
def warns(expected_warning, *args, **kwargs): def warns(expected_warning, *args, **kwargs):
"""Assert that code raises a particular class of warning. """Assert that code raises a particular class of warning.
Specifically, the input @expected_warning can be a warning class or Specifically, the parameter ``expected_warning`` can be a warning class or
tuple of warning classes, and the code must return that warning sequence of warning classes, and the code inside the ``with`` block must issue a warning of that class or
(if a single class) or one of those warnings (if a tuple). classes.
This helper produces a list of ``warnings.WarningMessage`` objects, This helper produces a list of :class:`warnings.WarningMessage` objects,
one for each warning raised. one for each warning raised.
This function can be used as a context manager, or any of the other ways This function can be used as a context manager, or any of the other ways
@ -96,10 +97,28 @@ def warns(expected_warning, *args, **kwargs):
>>> with warns(RuntimeWarning): >>> with warns(RuntimeWarning):
... warnings.warn("my warning", RuntimeWarning) ... warnings.warn("my warning", RuntimeWarning)
In the context manager form you may use the keyword argument ``match`` to assert
that the exception matches a text or regex::
>>> with warns(UserWarning, match='must be 0 or None'):
... warnings.warn("value must be 0 or None", UserWarning)
>>> with warns(UserWarning, match=r'must be \d+$'):
... warnings.warn("value must be 42", UserWarning)
>>> with warns(UserWarning, match=r'must be \d+$'):
... warnings.warn("this is not here", UserWarning)
Traceback (most recent call last):
...
Failed: DID NOT WARN. No warnings of type ...UserWarning... was emitted...
""" """
wcheck = WarningsChecker(expected_warning) match_expr = None
if not args: if not args:
return wcheck if "match" in kwargs:
match_expr = kwargs.pop("match")
return WarningsChecker(expected_warning, match_expr=match_expr)
elif isinstance(args[0], str): elif isinstance(args[0], str):
code, = args code, = args
assert isinstance(code, str) assert isinstance(code, str)
@ -107,12 +126,12 @@ def warns(expected_warning, *args, **kwargs):
loc = frame.f_locals.copy() loc = frame.f_locals.copy()
loc.update(kwargs) loc.update(kwargs)
with wcheck: with WarningsChecker(expected_warning, match_expr=match_expr):
code = _pytest._code.Source(code).compile() code = _pytest._code.Source(code).compile()
py.builtin.exec_(code, frame.f_globals, loc) py.builtin.exec_(code, frame.f_globals, loc)
else: else:
func = args[0] func = args[0]
with wcheck: with WarningsChecker(expected_warning, match_expr=match_expr):
return func(*args[1:], **kwargs) return func(*args[1:], **kwargs)
@ -172,7 +191,7 @@ class WarningsRecorder(warnings.catch_warnings):
class WarningsChecker(WarningsRecorder): class WarningsChecker(WarningsRecorder):
def __init__(self, expected_warning=None): def __init__(self, expected_warning=None, match_expr=None):
super(WarningsChecker, self).__init__() super(WarningsChecker, self).__init__()
msg = ("exceptions must be old-style classes or " msg = ("exceptions must be old-style classes or "
@ -187,6 +206,7 @@ class WarningsChecker(WarningsRecorder):
raise TypeError(msg % type(expected_warning)) raise TypeError(msg % type(expected_warning))
self.expected_warning = expected_warning self.expected_warning = expected_warning
self.match_expr = match_expr
def __exit__(self, *exc_info): def __exit__(self, *exc_info):
super(WarningsChecker, self).__exit__(*exc_info) super(WarningsChecker, self).__exit__(*exc_info)
@ -197,8 +217,17 @@ class WarningsChecker(WarningsRecorder):
if not any(issubclass(r.category, self.expected_warning) if not any(issubclass(r.category, self.expected_warning)
for r in self): for r in self):
__tracebackhide__ = True __tracebackhide__ = True
from _pytest.runner import fail
fail("DID NOT WARN. No warnings of type {0} was emitted. " fail("DID NOT WARN. No warnings of type {0} was emitted. "
"The list of emitted warnings is: {1}.".format( "The list of emitted warnings is: {1}.".format(
self.expected_warning, self.expected_warning,
[each.message for each in self])) [each.message for each in self]))
elif self.match_expr is not None:
for r in self:
if issubclass(r.category, self.expected_warning):
if re.compile(self.match_expr).search(str(r.message)):
break
else:
fail("DID NOT WARN. No warnings of type {0} matching"
" ('{1}') was emitted. The list of emitted warnings"
" is: {2}.".format(self.expected_warning, self.match_expr,
[each.message for each in self]))
@ -6,12 +6,14 @@ from __future__ import absolute_import, division, print_function
import py import py
import os import os
def pytest_addoption(parser): def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "resultlog plugin options") group = parser.getgroup("terminal reporting", "resultlog plugin options")
group.addoption('--resultlog', '--result-log', action="store", group.addoption('--resultlog', '--result-log', action="store",
metavar="path", default=None, metavar="path", default=None,
help="DEPRECATED path for machine-readable result log.") help="DEPRECATED path for machine-readable result log.")
def pytest_configure(config): def pytest_configure(config):
resultlog = config.option.resultlog resultlog = config.option.resultlog
# prevent opening resultlog on slave nodes (xdist) # prevent opening resultlog on slave nodes (xdist)
@ -26,6 +28,7 @@ def pytest_configure(config):
from _pytest.deprecated import RESULT_LOG from _pytest.deprecated import RESULT_LOG
config.warn('C1', RESULT_LOG) config.warn('C1', RESULT_LOG)
def pytest_unconfigure(config): def pytest_unconfigure(config):
resultlog = getattr(config, '_resultlog', None) resultlog = getattr(config, '_resultlog', None)
if resultlog: if resultlog:
@ -33,6 +36,7 @@ def pytest_unconfigure(config):
del config._resultlog del config._resultlog
config.pluginmanager.unregister(resultlog) config.pluginmanager.unregister(resultlog)
def generic_path(item): def generic_path(item):
chain = item.listchain() chain = item.listchain()
gpath = [chain[0].name] gpath = [chain[0].name]
@ -56,6 +60,7 @@ def generic_path(item):
fspath = newfspath fspath = newfspath
return ''.join(gpath) return ''.join(gpath)
class ResultLog(object): class ResultLog(object):
def __init__(self, config, logfile): def __init__(self, config, logfile):
self.config = config self.config = config
@ -2,23 +2,25 @@
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import bdb import bdb
import os
import sys import sys
from time import time from time import time
import py import py
from _pytest._code.code import TerminalRepr, ExceptionInfo from _pytest._code.code import TerminalRepr, ExceptionInfo
from _pytest.outcomes import skip, Skipped, TEST_OUTCOME
# #
# pytest plugin hooks # pytest plugin hooks
def pytest_addoption(parser): def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "reporting", after="general") group = parser.getgroup("terminal reporting", "reporting", after="general")
group.addoption('--durations', group.addoption('--durations',
action="store", type=int, default=None, metavar="N", action="store", type=int, default=None, metavar="N",
help="show N slowest setup/test durations (N=0 for all)."), help="show N slowest setup/test durations (N=0 for all)."),
def pytest_terminal_summary(terminalreporter): def pytest_terminal_summary(terminalreporter):
durations = terminalreporter.config.option.durations durations = terminalreporter.config.option.durations
if durations is None: if durations is None:
@ -44,22 +46,26 @@ def pytest_terminal_summary(terminalreporter):
tr.write_line("%02.2fs %-8s %s" % tr.write_line("%02.2fs %-8s %s" %
(rep.duration, rep.when, nodeid)) (rep.duration, rep.when, nodeid))
def pytest_sessionstart(session): def pytest_sessionstart(session):
session._setupstate = SetupState() session._setupstate = SetupState()
def pytest_sessionfinish(session): def pytest_sessionfinish(session):
session._setupstate.teardown_all() session._setupstate.teardown_all()
class NodeInfo:
def __init__(self, location):
self.location = location
def pytest_runtest_protocol(item, nextitem): def pytest_runtest_protocol(item, nextitem):
item.ihook.pytest_runtest_logstart( item.ihook.pytest_runtest_logstart(
nodeid=item.nodeid, location=item.location, nodeid=item.nodeid, location=item.location,
) )
runtestprotocol(item, nextitem=nextitem) runtestprotocol(item, nextitem=nextitem)
item.ihook.pytest_runtest_logfinish(
nodeid=item.nodeid, location=item.location,
)
return True return True
def runtestprotocol(item, log=True, nextitem=None): def runtestprotocol(item, log=True, nextitem=None):
hasrequest = hasattr(item, "_request") hasrequest = hasattr(item, "_request")
if hasrequest and not item._request: if hasrequest and not item._request:
@ -80,6 +86,7 @@ def runtestprotocol(item, log=True, nextitem=None):
item.funcargs = None item.funcargs = None
return reports return reports
def show_test_item(item): def show_test_item(item):
"""Show test function, parameters and the fixtures of the test item.""" """Show test function, parameters and the fixtures of the test item."""
tw = item.config.get_terminal_writer() tw = item.config.get_terminal_writer()
@ -90,10 +97,14 @@ def show_test_item(item):
if used_fixtures: if used_fixtures:
tw.write(' (fixtures used: {0})'.format(', '.join(used_fixtures))) tw.write(' (fixtures used: {0})'.format(', '.join(used_fixtures)))
def pytest_runtest_setup(item): def pytest_runtest_setup(item):
_update_current_test_var(item, 'setup')
item.session._setupstate.prepare(item) item.session._setupstate.prepare(item)
def pytest_runtest_call(item): def pytest_runtest_call(item):
_update_current_test_var(item, 'call')
try: try:
item.runtest() item.runtest()
except Exception: except Exception:
@ -106,8 +117,28 @@ def pytest_runtest_call(item):
del tb # Get rid of it in this namespace del tb # Get rid of it in this namespace
raise raise
def pytest_runtest_teardown(item, nextitem): def pytest_runtest_teardown(item, nextitem):
_update_current_test_var(item, 'teardown')
item.session._setupstate.teardown_exact(item, nextitem) item.session._setupstate.teardown_exact(item, nextitem)
_update_current_test_var(item, None)
def _update_current_test_var(item, when):
"""
Update PYTEST_CURRENT_TEST to reflect the current item and stage.
If ``when`` is None, delete PYTEST_CURRENT_TEST from the environment.
"""
var_name = 'PYTEST_CURRENT_TEST'
if when:
value = '{0} ({1})'.format(item.nodeid, when)
# don't allow null bytes on environment variables (see #2644, #2957)
value = value.replace('\x00', '(null)')
os.environ[var_name] = value
else:
os.environ.pop(var_name)
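For context, a minimal sketch (not part of the diff) of how the PYTEST_CURRENT_TEST variable set by `_update_current_test_var` can be consumed, e.g. by a watchdog that wants to know which test is currently hanging; the `(when)` suffix comes from the format string above:
```
import os

def current_test_description():
    # Value looks like "<nodeid> (<when>)", e.g. "test_foo.py::test_bar (call)",
    # and the variable is removed again after teardown (see the hook above).
    return os.environ.get("PYTEST_CURRENT_TEST", "<no test running>")
```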
def pytest_report_teststatus(report): def pytest_report_teststatus(report):
if report.when in ("setup", "teardown"): if report.when in ("setup", "teardown"):
@ -133,21 +164,25 @@ def call_and_report(item, when, log=True, **kwds):
hook.pytest_exception_interact(node=item, call=call, report=report) hook.pytest_exception_interact(node=item, call=call, report=report)
return report return report
def check_interactive_exception(call, report): def check_interactive_exception(call, report):
return call.excinfo and not ( return call.excinfo and not (
hasattr(report, "wasxfail") or hasattr(report, "wasxfail") or
call.excinfo.errisinstance(skip.Exception) or call.excinfo.errisinstance(skip.Exception) or
call.excinfo.errisinstance(bdb.BdbQuit)) call.excinfo.errisinstance(bdb.BdbQuit))
def call_runtest_hook(item, when, **kwds): def call_runtest_hook(item, when, **kwds):
hookname = "pytest_runtest_" + when hookname = "pytest_runtest_" + when
ihook = getattr(item.ihook, hookname) ihook = getattr(item.ihook, hookname)
return CallInfo(lambda: ihook(item=item, **kwds), when=when) return CallInfo(lambda: ihook(item=item, **kwds), when=when)
class CallInfo:
class CallInfo(object):
""" Result/Exception info a function invocation. """ """ Result/Exception info a function invocation. """
#: None or ExceptionInfo object. #: None or ExceptionInfo object.
excinfo = None excinfo = None
def __init__(self, func, when): def __init__(self, func, when):
#: context of invocation: one of "setup", "call", #: context of invocation: one of "setup", "call",
#: "teardown", "memocollect" #: "teardown", "memocollect"
@ -158,7 +193,7 @@ class CallInfo:
except KeyboardInterrupt: except KeyboardInterrupt:
self.stop = time() self.stop = time()
raise raise
except: except: # noqa
self.excinfo = ExceptionInfo() self.excinfo = ExceptionInfo()
self.stop = time() self.stop = time()
@ -169,6 +204,7 @@ class CallInfo:
status = "result: %r" % (self.result,) status = "result: %r" % (self.result,)
return "<CallInfo when=%r %s>" % (self.when, status) return "<CallInfo when=%r %s>" % (self.when, status)
def getslaveinfoline(node): def getslaveinfoline(node):
try: try:
return node._slaveinfocache return node._slaveinfocache
@ -179,6 +215,7 @@ def getslaveinfoline(node):
d['id'], d['sysplatform'], ver, d['executable']) d['id'], d['sysplatform'], ver, d['executable'])
return s return s
class BaseReport(object): class BaseReport(object):
def __init__(self, **kw): def __init__(self, **kw):
@ -219,6 +256,14 @@ class BaseReport(object):
exc = tw.stringio.getvalue() exc = tw.stringio.getvalue()
return exc.strip() return exc.strip()
@property
def caplog(self):
"""Return captured log lines, if log capturing is enabled
.. versionadded:: 3.5
"""
return '\n'.join(content for (prefix, content) in self.get_sections('Captured log'))
@property @property
def capstdout(self): def capstdout(self):
"""Return captured text from stdout, if capturing is enabled """Return captured text from stdout, if capturing is enabled
@ -243,10 +288,11 @@ class BaseReport(object):
def fspath(self): def fspath(self):
return self.nodeid.split("::")[0] return self.nodeid.split("::")[0]
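As an illustration of the report properties above (``caplog`` is new in 3.5, complementing ``capstdout``), a hypothetical ``conftest.py`` hook could echo captured log output for failing calls; this usage is a sketch, not part of this change:
```
def pytest_runtest_logreport(report):
    # report is a TestReport; echo the captured log section of failing test calls.
    if report.when == "call" and report.failed and report.caplog:
        print("captured log for %s:\n%s" % (report.nodeid, report.caplog))
```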
def pytest_runtest_makereport(item, call): def pytest_runtest_makereport(item, call):
when = call.when when = call.when
duration = call.stop-call.start duration = call.stop - call.start
keywords = dict([(x,1) for x in item.keywords]) keywords = dict([(x, 1) for x in item.keywords])
excinfo = call.excinfo excinfo = call.excinfo
sections = [] sections = []
if not call.excinfo: if not call.excinfo:
@ -268,17 +314,19 @@ def pytest_runtest_makereport(item, call):
longrepr = item._repr_failure_py(excinfo, longrepr = item._repr_failure_py(excinfo,
style=item.config.option.tbstyle) style=item.config.option.tbstyle)
for rwhen, key, content in item._report_sections: for rwhen, key, content in item._report_sections:
sections.append(("Captured %s %s" %(key, rwhen), content)) sections.append(("Captured %s %s" % (key, rwhen), content))
return TestReport(item.nodeid, item.location, return TestReport(item.nodeid, item.location,
keywords, outcome, longrepr, when, keywords, outcome, longrepr, when,
sections, duration) sections, duration, user_properties=item.user_properties)
class TestReport(BaseReport): class TestReport(BaseReport):
""" Basic test report object (also used for setup and teardown calls if """ Basic test report object (also used for setup and teardown calls if
they fail). they fail).
""" """
def __init__(self, nodeid, location, keywords, outcome, def __init__(self, nodeid, location, keywords, outcome,
longrepr, when, sections=(), duration=0, **extra): longrepr, when, sections=(), duration=0, user_properties=(), **extra):
#: normalized collection node id #: normalized collection node id
self.nodeid = nodeid self.nodeid = nodeid
@ -300,6 +348,10 @@ class TestReport(BaseReport):
#: one of 'setup', 'call', 'teardown' to indicate runtest phase. #: one of 'setup', 'call', 'teardown' to indicate runtest phase.
self.when = when self.when = when
#: user properties is a list of tuples (name, value) that holds user
#: defined properties of the test
self.user_properties = user_properties
#: list of pairs ``(str, str)`` of extra information which needs to #: list of pairs ``(str, str)`` of extra information which needs to
#: marshallable. Used by pytest to add captured text #: marshallable. Used by pytest to add captured text
#: from ``stdout`` and ``stderr``, but may be used by other plugins #: from ``stdout`` and ``stderr``, but may be used by other plugins
@ -315,14 +367,17 @@ class TestReport(BaseReport):
return "<TestReport %r when=%r outcome=%r>" % ( return "<TestReport %r when=%r outcome=%r>" % (
self.nodeid, self.when, self.outcome) self.nodeid, self.when, self.outcome)
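To illustrate the new ``user_properties`` field, a hypothetical plugin could append ``(name, value)`` pairs to the item, which ``pytest_runtest_makereport`` above then copies onto the ``TestReport`` (this assumes the item exposes a ``user_properties`` list, as the makereport call implies):
```
import time

def pytest_runtest_setup(item):
    # Hypothetical sketch: record when setup started; the pair ends up in
    # TestReport.user_properties and can be consumed by reporting plugins.
    item.user_properties.append(("setup_started_at", time.time()))
```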
class TeardownErrorReport(BaseReport): class TeardownErrorReport(BaseReport):
outcome = "failed" outcome = "failed"
when = "teardown" when = "teardown"
def __init__(self, longrepr, **extra): def __init__(self, longrepr, **extra):
self.longrepr = longrepr self.longrepr = longrepr
self.sections = [] self.sections = []
self.__dict__.update(extra) self.__dict__.update(extra)
def pytest_make_collect_report(collector): def pytest_make_collect_report(collector):
call = CallInfo( call = CallInfo(
lambda: list(collector.collect()), lambda: list(collector.collect()),
@ -367,14 +422,18 @@ class CollectReport(BaseReport):
return "<CollectReport %r lenresult=%s outcome=%r>" % ( return "<CollectReport %r lenresult=%s outcome=%r>" % (
self.nodeid, len(self.result), self.outcome) self.nodeid, len(self.result), self.outcome)
class CollectErrorRepr(TerminalRepr): class CollectErrorRepr(TerminalRepr):
def __init__(self, msg): def __init__(self, msg):
self.longrepr = msg self.longrepr = msg
def toterminal(self, out): def toterminal(self, out):
out.line(self.longrepr, red=True) out.line(self.longrepr, red=True)
class SetupState(object): class SetupState(object):
""" shared state for setting up/tearing down test items or collectors. """ """ shared state for setting up/tearing down test items or collectors. """
def __init__(self): def __init__(self):
self.stack = [] self.stack = []
self._finalizers = {} self._finalizers = {}
@ -385,8 +444,8 @@ class SetupState(object):
is called at the end of teardown_all(). is called at the end of teardown_all().
""" """
assert colitem and not isinstance(colitem, tuple) assert colitem and not isinstance(colitem, tuple)
assert py.builtin.callable(finalizer) assert callable(finalizer)
#assert colitem in self.stack # some unit tests don't setup stack :/ # assert colitem in self.stack # some unit tests don't setup stack :/
self._finalizers.setdefault(colitem, []).append(finalizer) self._finalizers.setdefault(colitem, []).append(finalizer)
def _pop_and_teardown(self): def _pop_and_teardown(self):
@ -400,7 +459,7 @@ class SetupState(object):
fin = finalizers.pop() fin = finalizers.pop()
try: try:
fin() fin()
except Exception: except TEST_OUTCOME:
# XXX Only first exception will be seen by user, # XXX Only first exception will be seen by user,
# ideally all should be reported. # ideally all should be reported.
if exc is None: if exc is None:
@ -447,10 +506,11 @@ class SetupState(object):
self.stack.append(col) self.stack.append(col)
try: try:
col.setup() col.setup()
except Exception: except TEST_OUTCOME:
col._prepare_exc = sys.exc_info() col._prepare_exc = sys.exc_info()
raise raise
def collect_one_node(collector): def collect_one_node(collector):
ihook = collector.ihook ihook = collector.ihook
ihook.pytest_collectstart(collector=collector) ihook.pytest_collectstart(collector=collector)
@ -459,122 +519,3 @@ def collect_one_node(collector):
if call and check_interactive_exception(call, rep): if call and check_interactive_exception(call, rep):
ihook.pytest_exception_interact(node=collector, call=call, report=rep) ihook.pytest_exception_interact(node=collector, call=call, report=rep)
return rep return rep
# =============================================================
# Test OutcomeExceptions and helpers for creating them.
class OutcomeException(Exception):
""" OutcomeException and its subclass instances indicate and
contain info about test and collection outcomes.
"""
def __init__(self, msg=None, pytrace=True):
Exception.__init__(self, msg)
self.msg = msg
self.pytrace = pytrace
def __repr__(self):
if self.msg:
val = self.msg
if isinstance(val, bytes):
val = py._builtin._totext(val, errors='replace')
return val
return "<%s instance>" %(self.__class__.__name__,)
__str__ = __repr__
class Skipped(OutcomeException):
# XXX hackish: on 3k we fake to live in the builtins
# in order to have Skipped exception printing shorter/nicer
__module__ = 'builtins'
def __init__(self, msg=None, pytrace=True, allow_module_level=False):
OutcomeException.__init__(self, msg=msg, pytrace=pytrace)
self.allow_module_level = allow_module_level
class Failed(OutcomeException):
""" raised from an explicit call to pytest.fail() """
__module__ = 'builtins'
class Exit(KeyboardInterrupt):
""" raised for immediate program exits (no tracebacks/summaries)"""
def __init__(self, msg="unknown reason"):
self.msg = msg
KeyboardInterrupt.__init__(self, msg)
# exposed helper methods
def exit(msg):
""" exit testing process as if KeyboardInterrupt was triggered. """
__tracebackhide__ = True
raise Exit(msg)
exit.Exception = Exit
def skip(msg=""):
""" skip an executing test with the given message. Note: it's usually
better to use the pytest.mark.skipif marker to declare a test to be
skipped under certain conditions like mismatching platforms or
dependencies. See the pytest_skipping plugin for details.
"""
__tracebackhide__ = True
raise Skipped(msg=msg)
skip.Exception = Skipped
def fail(msg="", pytrace=True):
""" explicitly fail an currently-executing test with the given Message.
:arg pytrace: if false the msg represents the full failure information
and no python traceback will be reported.
"""
__tracebackhide__ = True
raise Failed(msg=msg, pytrace=pytrace)
fail.Exception = Failed
def importorskip(modname, minversion=None):
""" return imported module if it has at least "minversion" as its
__version__ attribute. If no minversion is specified, a skip
is only triggered if the module cannot be imported.
"""
import warnings
__tracebackhide__ = True
compile(modname, '', 'eval') # to catch syntaxerrors
should_skip = False
with warnings.catch_warnings():
# make sure to ignore ImportWarnings that might happen because
# of existing directories with the same name we're trying to
# import but without a __init__.py file
warnings.simplefilter('ignore')
try:
__import__(modname)
except ImportError:
# Do not raise chained exception here(#1485)
should_skip = True
if should_skip:
raise Skipped("could not import %r" %(modname,), allow_module_level=True)
mod = sys.modules[modname]
if minversion is None:
return mod
verattr = getattr(mod, '__version__', None)
if minversion is not None:
try:
from pkg_resources import parse_version as pv
except ImportError:
raise Skipped("we have a required version for %r but can not import "
"pkg_resources to parse version strings." % (modname,),
allow_module_level=True)
if verattr is None or pv(verattr) < pv(minversion):
raise Skipped("module %r has __version__ %r, required is: %r" %(
modname, verattr, minversion), allow_module_level=True)
return mod
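These helpers now live in ``_pytest.outcomes`` (see the new import at the top of this file) and remain available on the ``pytest`` namespace; a typical use of ``importorskip`` is skipping a whole test module when an optional dependency is missing or too old:
```
import pytest

# Skips the module at import time unless docutils >= 0.3 is installed.
docutils = pytest.importorskip("docutils", minversion="0.3")
```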

View File

@ -44,7 +44,7 @@ def _show_fixture_action(fixturedef, msg):
config = fixturedef._fixturemanager.config config = fixturedef._fixturemanager.config
capman = config.pluginmanager.getplugin('capturemanager') capman = config.pluginmanager.getplugin('capturemanager')
if capman: if capman:
out, err = capman.suspendcapture() out, err = capman.suspend_global_capture()
tw = config.get_terminal_writer() tw = config.get_terminal_writer()
tw.line() tw.line()
@ -63,7 +63,7 @@ def _show_fixture_action(fixturedef, msg):
tw.write('[{0}]'.format(fixturedef.cached_param)) tw.write('[{0}]'.format(fixturedef.cached_param))
if capman: if capman:
capman.resumecapture() capman.resume_global_capture()
sys.stdout.write(out) sys.stdout.write(out)
sys.stderr.write(err) sys.stderr.write(err)

View File

@ -1,14 +1,10 @@
""" support for skip/xfail functions and markers. """ """ support for skip/xfail functions and markers. """
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import os
import sys
import traceback
import py
from _pytest.config import hookimpl from _pytest.config import hookimpl
from _pytest.mark import MarkInfo, MarkDecorator from _pytest.mark.evaluate import MarkEvaluator
from _pytest.runner import fail, skip from _pytest.outcomes import fail, skip, xfail
def pytest_addoption(parser): def pytest_addoption(parser):
group = parser.getgroup("general") group = parser.getgroup("general")
@ -16,9 +12,9 @@ def pytest_addoption(parser):
action="store_true", dest="runxfail", default=False, action="store_true", dest="runxfail", default=False,
help="run tests even if they are marked xfail") help="run tests even if they are marked xfail")
parser.addini("xfail_strict", "default for the strict parameter of xfail " parser.addini("xfail_strict",
"markers when not given explicitly (default: " "default for the strict parameter of xfail "
"False)", "markers when not given explicitly (default: False)",
default=False, default=False,
type="bool") type="bool")
@ -33,7 +29,7 @@ def pytest_configure(config):
def nop(*args, **kwargs): def nop(*args, **kwargs):
pass pass
nop.Exception = XFailed nop.Exception = xfail.Exception
setattr(pytest, "xfail", nop) setattr(pytest, "xfail", nop)
config.addinivalue_line("markers", config.addinivalue_line("markers",
@ -59,125 +55,19 @@ def pytest_configure(config):
) )
class XFailed(fail.Exception):
""" raised from an explicit call to pytest.xfail() """
def xfail(reason=""):
""" xfail an executing test or setup functions with the given reason."""
__tracebackhide__ = True
raise XFailed(reason)
xfail.Exception = XFailed
class MarkEvaluator:
def __init__(self, item, name):
self.item = item
self.name = name
@property
def holder(self):
return self.item.keywords.get(self.name)
def __bool__(self):
return bool(self.holder)
__nonzero__ = __bool__
def wasvalid(self):
return not hasattr(self, 'exc')
def invalidraise(self, exc):
raises = self.get('raises')
if not raises:
return
return not isinstance(exc, raises)
def istrue(self):
try:
return self._istrue()
except Exception:
self.exc = sys.exc_info()
if isinstance(self.exc[1], SyntaxError):
msg = [" " * (self.exc[1].offset + 4) + "^", ]
msg.append("SyntaxError: invalid syntax")
else:
msg = traceback.format_exception_only(*self.exc[:2])
fail("Error evaluating %r expression\n"
" %s\n"
"%s"
% (self.name, self.expr, "\n".join(msg)),
pytrace=False)
def _getglobals(self):
d = {'os': os, 'sys': sys, 'config': self.item.config}
if hasattr(self.item, 'obj'):
d.update(self.item.obj.__globals__)
return d
def _istrue(self):
if hasattr(self, 'result'):
return self.result
if self.holder:
if self.holder.args or 'condition' in self.holder.kwargs:
self.result = False
# "holder" might be a MarkInfo or a MarkDecorator; only
# MarkInfo keeps track of all parameters it received in an
# _arglist attribute
marks = getattr(self.holder, '_marks', None) \
or [self.holder.mark]
for _, args, kwargs in marks:
if 'condition' in kwargs:
args = (kwargs['condition'],)
for expr in args:
self.expr = expr
if isinstance(expr, py.builtin._basestring):
d = self._getglobals()
result = cached_eval(self.item.config, expr, d)
else:
if "reason" not in kwargs:
# XXX better be checked at collection time
msg = "you need to specify reason=STRING " \
"when using booleans as conditions."
fail(msg)
result = bool(expr)
if result:
self.result = True
self.reason = kwargs.get('reason', None)
self.expr = expr
return self.result
else:
self.result = True
return getattr(self, 'result', False)
def get(self, attr, default=None):
return self.holder.kwargs.get(attr, default)
def getexplanation(self):
expl = getattr(self, 'reason', None) or self.get('reason', None)
if not expl:
if not hasattr(self, 'expr'):
return ""
else:
return "condition: " + str(self.expr)
return expl
@hookimpl(tryfirst=True) @hookimpl(tryfirst=True)
def pytest_runtest_setup(item): def pytest_runtest_setup(item):
# Check if skip or skipif are specified as pytest marks # Check if skip or skipif are specified as pytest marks
item._skipped_by_mark = False
skipif_info = item.keywords.get('skipif')
if isinstance(skipif_info, (MarkInfo, MarkDecorator)):
eval_skipif = MarkEvaluator(item, 'skipif') eval_skipif = MarkEvaluator(item, 'skipif')
if eval_skipif.istrue(): if eval_skipif.istrue():
item._evalskip = eval_skipif item._skipped_by_mark = True
skip(eval_skipif.getexplanation()) skip(eval_skipif.getexplanation())
skip_info = item.keywords.get('skip') for skip_info in item.iter_markers():
if isinstance(skip_info, (MarkInfo, MarkDecorator)): if skip_info.name != 'skip':
item._evalskip = True continue
item._skipped_by_mark = True
if 'reason' in skip_info.kwargs: if 'reason' in skip_info.kwargs:
skip(skip_info.kwargs['reason']) skip(skip_info.kwargs['reason'])
elif skip_info.args: elif skip_info.args:
@ -224,7 +114,6 @@ def pytest_runtest_makereport(item, call):
outcome = yield outcome = yield
rep = outcome.get_result() rep = outcome.get_result()
evalxfail = getattr(item, '_evalxfail', None) evalxfail = getattr(item, '_evalxfail', None)
evalskip = getattr(item, '_evalskip', None)
# unittest special case, see setting of _unexpectedsuccess # unittest special case, see setting of _unexpectedsuccess
if hasattr(item, '_unexpectedsuccess') and rep.when == "call": if hasattr(item, '_unexpectedsuccess') and rep.when == "call":
from _pytest.compat import _is_unittest_unexpected_success_a_failure from _pytest.compat import _is_unittest_unexpected_success_a_failure
@ -260,7 +149,7 @@ def pytest_runtest_makereport(item, call):
else: else:
rep.outcome = "passed" rep.outcome = "passed"
rep.wasxfail = explanation rep.wasxfail = explanation
elif evalskip is not None and rep.skipped and type(rep.longrepr) is tuple: elif getattr(item, '_skipped_by_mark', False) and rep.skipped and type(rep.longrepr) is tuple:
# skipped by mark.skipif; change the location of the failure # skipped by mark.skipif; change the location of the failure
# to point to the item definition, otherwise it will display # to point to the item definition, otherwise it will display
# the location of where the skip exception was raised within pytest # the location of where the skip exception was raised within pytest
@ -268,7 +157,10 @@ def pytest_runtest_makereport(item, call):
filename, line = item.location[:2] filename, line = item.location[:2]
rep.longrepr = filename, line, reason rep.longrepr = filename, line, reason
# called by terminalreporter progress reporting # called by terminalreporter progress reporting
def pytest_report_teststatus(report): def pytest_report_teststatus(report):
if hasattr(report, "wasxfail"): if hasattr(report, "wasxfail"):
if report.skipped: if report.skipped:
@ -276,11 +168,14 @@ def pytest_report_teststatus(report):
elif report.passed: elif report.passed:
return "xpassed", "X", ("XPASS", {'yellow': True}) return "xpassed", "X", ("XPASS", {'yellow': True})
# called by the terminalreporter instance/plugin # called by the terminalreporter instance/plugin
def pytest_terminal_summary(terminalreporter): def pytest_terminal_summary(terminalreporter):
tr = terminalreporter tr = terminalreporter
if not tr.reportchars: if not tr.reportchars:
#for name in "xfailed skipped failed xpassed": # for name in "xfailed skipped failed xpassed":
# if not tr.stats.get(name, 0): # if not tr.stats.get(name, 0):
# tr.write_line("HINT: use '-r' option to see extra " # tr.write_line("HINT: use '-r' option to see extra "
# "summary info about tests") # "summary info about tests")
@ -289,18 +184,8 @@ def pytest_terminal_summary(terminalreporter):
lines = [] lines = []
for char in tr.reportchars: for char in tr.reportchars:
if char == "x": action = REPORTCHAR_ACTIONS.get(char, lambda tr, lines: None)
show_xfailed(terminalreporter, lines) action(terminalreporter, lines)
elif char == "X":
show_xpassed(terminalreporter, lines)
elif char in "fF":
show_simple(terminalreporter, lines, 'failed', "FAIL %s")
elif char in "sS":
show_skipped(terminalreporter, lines)
elif char == "E":
show_simple(terminalreporter, lines, 'error', "ERROR %s")
elif char == 'p':
show_simple(terminalreporter, lines, 'passed', "PASSED %s")
if lines: if lines:
tr._tw.sep("=", "short test summary info") tr._tw.sep("=", "short test summary info")
@ -336,45 +221,65 @@ def show_xpassed(terminalreporter, lines):
lines.append("XPASS %s %s" % (pos, reason)) lines.append("XPASS %s %s" % (pos, reason))
def cached_eval(config, expr, d):
if not hasattr(config, '_evalcache'):
config._evalcache = {}
try:
return config._evalcache[expr]
except KeyError:
import _pytest._code
exprcode = _pytest._code.compile(expr, mode="eval")
config._evalcache[expr] = x = eval(exprcode, d)
return x
def folded_skips(skipped): def folded_skips(skipped):
d = {} d = {}
for event in skipped: for event in skipped:
key = event.longrepr key = event.longrepr
assert len(key) == 3, (event, key) assert len(key) == 3, (event, key)
keywords = getattr(event, 'keywords', {})
# folding reports with global pytestmark variable
# this is a workaround, because for now we cannot identify the scope of a skip marker
# TODO: revisit once the scope of marks is fixed
when = getattr(event, 'when', None)
if when == 'setup' and 'skip' in keywords and 'pytestmark' not in keywords:
key = (key[0], None, key[2])
d.setdefault(key, []).append(event) d.setdefault(key, []).append(event)
l = [] values = []
for key, events in d.items(): for key, events in d.items():
l.append((len(events),) + key) values.append((len(events),) + key)
return l return values
def show_skipped(terminalreporter, lines): def show_skipped(terminalreporter, lines):
tr = terminalreporter tr = terminalreporter
skipped = tr.stats.get('skipped', []) skipped = tr.stats.get('skipped', [])
if skipped: if skipped:
#if not tr.hasopt('skipped'): # if not tr.hasopt('skipped'):
# tr.write_line( # tr.write_line(
# "%d skipped tests, specify -rs for more info" % # "%d skipped tests, specify -rs for more info" %
# len(skipped)) # len(skipped))
# return # return
fskips = folded_skips(skipped) fskips = folded_skips(skipped)
if fskips: if fskips:
#tr.write_sep("_", "skipped test summary") # tr.write_sep("_", "skipped test summary")
for num, fspath, lineno, reason in fskips: for num, fspath, lineno, reason in fskips:
if reason.startswith("Skipped: "): if reason.startswith("Skipped: "):
reason = reason[9:] reason = reason[9:]
if lineno is not None:
lines.append( lines.append(
"SKIP [%d] %s:%d: %s" % "SKIP [%d] %s:%d: %s" %
(num, fspath, lineno, reason)) (num, fspath, lineno + 1, reason))
else:
lines.append(
"SKIP [%d] %s: %s" %
(num, fspath, reason))
def shower(stat, format):
def show_(terminalreporter, lines):
return show_simple(terminalreporter, lines, stat, format)
return show_
REPORTCHAR_ACTIONS = {
'x': show_xfailed,
'X': show_xpassed,
'f': shower('failed', "FAIL %s"),
'F': shower('failed', "FAIL %s"),
's': show_skipped,
'S': show_skipped,
'p': shower('passed', "PASSED %s"),
'E': shower('error', "ERROR %s")
}
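The mapping above is what the ``-r`` command line option indexes into; for example, a run that should summarize failed, skipped, xfailed and xpassed tests could be launched programmatically like this (a usage sketch, equivalent to ``pytest -rfsxX``):
```
import pytest

if __name__ == "__main__":
    pytest.main(["-rfsxX"])
```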

View File

@ -5,23 +5,60 @@ This is a good source for looking at the various reporting hooks.
from __future__ import absolute_import, division, print_function from __future__ import absolute_import, division, print_function
import itertools import itertools
from _pytest.main import EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, \ import platform
EXIT_USAGEERROR, EXIT_NOTESTSCOLLECTED
import pytest
import py
import sys import sys
import time import time
import platform
import _pytest._pluggy as pluggy import pluggy
import py
import six
from more_itertools import collapse
import pytest
from _pytest import nodes
from _pytest.main import EXIT_OK, EXIT_TESTSFAILED, EXIT_INTERRUPTED, \
EXIT_USAGEERROR, EXIT_NOTESTSCOLLECTED
import argparse
class MoreQuietAction(argparse.Action):
"""
a modified copy of the argparse count action which counts down and updates
the legacy quiet attribute at the same time.
Used to unify verbosity handling.
"""
def __init__(self,
option_strings,
dest,
default=None,
required=False,
help=None):
super(MoreQuietAction, self).__init__(
option_strings=option_strings,
dest=dest,
nargs=0,
default=default,
required=required,
help=help)
def __call__(self, parser, namespace, values, option_string=None):
new_count = getattr(namespace, self.dest, 0) - 1
setattr(namespace, self.dest, new_count)
# todo Deprecate config.quiet
namespace.quiet = getattr(namespace, 'quiet', 0) + 1
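A standalone check of the counting-down behaviour implemented above (assuming this module stays importable as ``_pytest.terminal``): each ``-q`` decrements the shared ``verbose`` destination and bumps the legacy ``quiet`` attribute.
```
import argparse
from _pytest.terminal import MoreQuietAction

parser = argparse.ArgumentParser()
parser.add_argument("-v", "--verbose", action="count", default=0, dest="verbose")
parser.add_argument("-q", "--quiet", action=MoreQuietAction, default=0, dest="verbose")

ns = parser.parse_args(["-v", "-q", "-q"])
assert ns.verbose == -1 and ns.quiet == 2
```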
def pytest_addoption(parser): def pytest_addoption(parser):
group = parser.getgroup("terminal reporting", "reporting", after="general") group = parser.getgroup("terminal reporting", "reporting", after="general")
group._addoption('-v', '--verbose', action="count", group._addoption('-v', '--verbose', action="count", default=0,
dest="verbose", default=0, help="increase verbosity."), dest="verbose", help="increase verbosity."),
group._addoption('-q', '--quiet', action="count", group._addoption('-q', '--quiet', action=MoreQuietAction, default=0,
dest="quiet", default=0, help="decrease verbosity."), dest="verbose", help="decrease verbosity."),
group._addoption("--verbosity", dest='verbose', type=int, default=0,
help="set verbosity")
group._addoption('-r', group._addoption('-r',
action="store", dest="reportchars", default='', metavar="chars", action="store", dest="reportchars", default='', metavar="chars",
help="show extra test summary info as specified by chars (f)ailed, " help="show extra test summary info as specified by chars (f)ailed, "
@ -39,6 +76,11 @@ def pytest_addoption(parser):
action="store", dest="tbstyle", default='auto', action="store", dest="tbstyle", default='auto',
choices=['auto', 'long', 'short', 'no', 'line', 'native'], choices=['auto', 'long', 'short', 'no', 'line', 'native'],
help="traceback print mode (auto/long/short/line/native/no).") help="traceback print mode (auto/long/short/line/native/no).")
group._addoption('--show-capture',
action="store", dest="showcapture",
choices=['no', 'stdout', 'stderr', 'log', 'all'], default='all',
help="Controls how captured stdout/stderr/log is shown on failed tests. "
"Default is 'all'.")
group._addoption('--fulltrace', '--full-trace', group._addoption('--fulltrace', '--full-trace',
action="store_true", default=False, action="store_true", default=False,
help="don't cut any tracebacks (default is to cut).") help="don't cut any tracebacks (default is to cut).")
@ -47,8 +89,12 @@ def pytest_addoption(parser):
choices=['yes', 'no', 'auto'], choices=['yes', 'no', 'auto'],
help="color terminal output (yes/no/auto).") help="color terminal output (yes/no/auto).")
parser.addini("console_output_style",
help="console output: classic or with additional progress information (classic|progress).",
default='progress')
def pytest_configure(config): def pytest_configure(config):
config.option.verbose -= config.option.quiet
reporter = TerminalReporter(config, sys.stdout) reporter = TerminalReporter(config, sys.stdout)
config.pluginmanager.register(reporter, 'terminalreporter') config.pluginmanager.register(reporter, 'terminalreporter')
if config.option.debug or config.option.traceconfig: if config.option.debug or config.option.traceconfig:
@ -57,6 +103,7 @@ def pytest_configure(config):
reporter.write_line("[traceconfig] " + msg) reporter.write_line("[traceconfig] " + msg)
config.trace.root.setprocessor("pytest:config", mywriter) config.trace.root.setprocessor("pytest:config", mywriter)
def getreportopt(config): def getreportopt(config):
reportopts = "" reportopts = ""
reportchars = config.option.reportchars reportchars = config.option.reportchars
@ -72,6 +119,7 @@ def getreportopt(config):
reportopts = 'fEsxXw' reportopts = 'fEsxXw'
return reportopts return reportopts
def pytest_report_teststatus(report): def pytest_report_teststatus(report):
if report.passed: if report.passed:
letter = "." letter = "."
@ -84,10 +132,11 @@ def pytest_report_teststatus(report):
return report.outcome, letter, report.outcome.upper() return report.outcome, letter, report.outcome.upper()
class WarningReport: class WarningReport(object):
""" """
Simple structure to hold warnings information captured by ``pytest_logwarning``. Simple structure to hold warnings information captured by ``pytest_logwarning``.
""" """
def __init__(self, code, message, nodeid=None, fslocation=None): def __init__(self, code, message, nodeid=None, fslocation=None):
""" """
:param code: unused :param code: unused
@ -118,7 +167,7 @@ class WarningReport:
return None return None
class TerminalReporter: class TerminalReporter(object):
def __init__(self, config, file=None): def __init__(self, config, file=None):
import _pytest.config import _pytest.config
self.config = config self.config = config
@ -127,17 +176,32 @@ class TerminalReporter:
self.showfspath = self.verbosity >= 0 self.showfspath = self.verbosity >= 0
self.showlongtestinfo = self.verbosity > 0 self.showlongtestinfo = self.verbosity > 0
self._numcollected = 0 self._numcollected = 0
self._session = None
self.stats = {} self.stats = {}
self.startdir = py.path.local() self.startdir = py.path.local()
if file is None: if file is None:
file = sys.stdout file = sys.stdout
self._tw = self.writer = _pytest.config.create_terminal_writer(config, self._tw = _pytest.config.create_terminal_writer(config, file)
file) # self.writer will be deprecated in pytest-3.4
self.writer = self._tw
self._screen_width = self._tw.fullwidth
self.currentfspath = None self.currentfspath = None
self.reportchars = getreportopt(config) self.reportchars = getreportopt(config)
self.hasmarkup = self._tw.hasmarkup self.hasmarkup = self._tw.hasmarkup
self.isatty = file.isatty() self.isatty = file.isatty()
self._progress_nodeids_reported = set()
self._show_progress_info = self._determine_show_progress_info()
def _determine_show_progress_info(self):
"""Return True if we should display progress information based on the current config"""
# do not show progress if we are not capturing output (#3038)
if self.config.getoption('capture') == 'no':
return False
# do not show progress if we are showing fixture setup/teardown
if self.config.getoption('setupshow'):
return False
return self.config.getini('console_output_style') == 'progress'
def hasopt(self, char): def hasopt(self, char):
char = {'xfailed': 'x', 'skipped': 's'}.get(char, char) char = {'xfailed': 'x', 'skipped': 's'}.get(char, char)
@ -146,6 +210,8 @@ class TerminalReporter:
def write_fspath_result(self, nodeid, res): def write_fspath_result(self, nodeid, res):
fspath = self.config.rootdir.join(nodeid.split("::")[0]) fspath = self.config.rootdir.join(nodeid.split("::")[0])
if fspath != self.currentfspath: if fspath != self.currentfspath:
if self.currentfspath is not None:
self._write_progress_information_filling_space()
self.currentfspath = fspath self.currentfspath = fspath
fspath = self.startdir.bestrelpath(fspath) fspath = self.startdir.bestrelpath(fspath)
self._tw.line() self._tw.line()
@ -170,14 +236,28 @@ class TerminalReporter:
self._tw.write(content, **markup) self._tw.write(content, **markup)
def write_line(self, line, **markup): def write_line(self, line, **markup):
if not py.builtin._istext(line): if not isinstance(line, six.text_type):
line = py.builtin.text(line, errors="replace") line = six.text_type(line, errors="replace")
self.ensure_newline() self.ensure_newline()
self._tw.line(line, **markup) self._tw.line(line, **markup)
def rewrite(self, line, **markup): def rewrite(self, line, **markup):
"""
Rewinds the terminal cursor to the beginning and writes the given line.
:kwarg erase: if True, will also add spaces until the full terminal width to ensure
previous lines are properly erased.
The rest of the keyword arguments are markup instructions.
"""
erase = markup.pop('erase', False)
if erase:
fill_count = self._tw.fullwidth - len(line) - 1
fill = ' ' * fill_count
else:
fill = ''
line = str(line) line = str(line)
self._tw.write("\r" + line, **markup) self._tw.write("\r" + line + fill, **markup)
def write_sep(self, sep, title=None, **markup): def write_sep(self, sep, title=None, **markup):
self.ensure_newline() self.ensure_newline()
@ -190,7 +270,7 @@ class TerminalReporter:
self._tw.line(msg, **kw) self._tw.line(msg, **kw)
def pytest_internalerror(self, excrepr): def pytest_internalerror(self, excrepr):
for line in py.builtin.text(excrepr).split("\n"): for line in six.text_type(excrepr).split("\n"):
self.write_line("INTERNALERROR> " + line) self.write_line("INTERNALERROR> " + line)
return 1 return 1
@ -225,38 +305,76 @@ class TerminalReporter:
rep = report rep = report
res = self.config.hook.pytest_report_teststatus(report=rep) res = self.config.hook.pytest_report_teststatus(report=rep)
cat, letter, word = res cat, letter, word = res
if isinstance(word, tuple):
word, markup = word
else:
markup = None
self.stats.setdefault(cat, []).append(rep) self.stats.setdefault(cat, []).append(rep)
self._tests_ran = True self._tests_ran = True
if not letter and not word: if not letter and not word:
# probably passed setup/teardown # probably passed setup/teardown
return return
running_xdist = hasattr(rep, 'node')
if self.verbosity <= 0: if self.verbosity <= 0:
if not hasattr(rep, 'node') and self.showfspath: if not running_xdist and self.showfspath:
self.write_fspath_result(rep.nodeid, letter) self.write_fspath_result(rep.nodeid, letter)
else: else:
self._tw.write(letter) self._tw.write(letter)
else: else:
if isinstance(word, tuple): self._progress_nodeids_reported.add(rep.nodeid)
word, markup = word if markup is None:
else:
if rep.passed: if rep.passed:
markup = {'green':True} markup = {'green': True}
elif rep.failed: elif rep.failed:
markup = {'red':True} markup = {'red': True}
elif rep.skipped: elif rep.skipped:
markup = {'yellow':True} markup = {'yellow': True}
else:
markup = {}
line = self._locationline(rep.nodeid, *rep.location) line = self._locationline(rep.nodeid, *rep.location)
if not hasattr(rep, 'node'): if not running_xdist:
self.write_ensure_prefix(line, word, **markup) self.write_ensure_prefix(line, word, **markup)
#self._tw.write(word, **markup) if self._show_progress_info:
self._write_progress_information_filling_space()
else: else:
self.ensure_newline() self.ensure_newline()
if hasattr(rep, 'node'): self._tw.write("[%s]" % rep.node.gateway.id)
self._tw.write("[%s] " % rep.node.gateway.id) if self._show_progress_info:
self._tw.write(self._get_progress_information_message() + " ", cyan=True)
else:
self._tw.write(' ')
self._tw.write(word, **markup) self._tw.write(word, **markup)
self._tw.write(" " + line) self._tw.write(" " + line)
self.currentfspath = -2 self.currentfspath = -2
def pytest_runtest_logfinish(self, nodeid):
if self.verbosity <= 0 and self._show_progress_info:
self._progress_nodeids_reported.add(nodeid)
last_item = len(self._progress_nodeids_reported) == self._session.testscollected
if last_item:
self._write_progress_information_filling_space()
else:
past_edge = self._tw.chars_on_current_line + self._PROGRESS_LENGTH + 1 >= self._screen_width
if past_edge:
msg = self._get_progress_information_message()
self._tw.write(msg + '\n', cyan=True)
_PROGRESS_LENGTH = len(' [100%]')
def _get_progress_information_message(self):
if self.config.getoption('capture') == 'no':
return ''
collected = self._session.testscollected
if collected:
progress = len(self._progress_nodeids_reported) * 100 // collected
return ' [{:3d}%]'.format(progress)
return ' [100%]'
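A worked example of the percentage computed above: with 8 of 50 collected tests reported so far, the suffix is `` [ 16%]`` (floor division); the suffix can be disabled by setting ``console_output_style = classic`` in the ini file, per the option added earlier in this diff.
```
reported, collected = 8, 50
progress = reported * 100 // collected      # 16
suffix = ' [{:3d}%]'.format(progress)       # ' [ 16%]'
assert suffix == ' [ 16%]'
```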
def _write_progress_information_filling_space(self):
msg = self._get_progress_information_message()
fill = ' ' * (self._tw.fullwidth - self._tw.chars_on_current_line - len(msg) - 1)
self.write(fill + msg, cyan=True)
def pytest_collection(self): def pytest_collection(self):
if not self.isatty and self.config.option.verbose >= 1: if not self.isatty and self.config.option.verbose >= 1:
self.write("collecting ... ", bold=True) self.write("collecting ... ", bold=True)
@ -269,7 +387,7 @@ class TerminalReporter:
items = [x for x in report.result if isinstance(x, pytest.Item)] items = [x for x in report.result if isinstance(x, pytest.Item)]
self._numcollected += len(items) self._numcollected += len(items)
if self.isatty: if self.isatty:
#self.write_fspath_result(report.nodeid, 'E') # self.write_fspath_result(report.nodeid, 'E')
self.report_collect() self.report_collect()
def report_collect(self, final=False): def report_collect(self, final=False):
@ -278,6 +396,7 @@ class TerminalReporter:
errors = len(self.stats.get('error', [])) errors = len(self.stats.get('error', []))
skipped = len(self.stats.get('skipped', [])) skipped = len(self.stats.get('skipped', []))
deselected = len(self.stats.get('deselected', []))
if final: if final:
line = "collected " line = "collected "
else: else:
@ -285,20 +404,24 @@ class TerminalReporter:
line += str(self._numcollected) + " item" + ('' if self._numcollected == 1 else 's') line += str(self._numcollected) + " item" + ('' if self._numcollected == 1 else 's')
if errors: if errors:
line += " / %d errors" % errors line += " / %d errors" % errors
if deselected:
line += " / %d deselected" % deselected
if skipped: if skipped:
line += " / %d skipped" % skipped line += " / %d skipped" % skipped
if self.isatty: if self.isatty:
self.rewrite(line, bold=True, erase=True)
if final: if final:
line += " \n" self.write('\n')
self.rewrite(line, bold=True)
else: else:
self.write_line(line) self.write_line(line)
@pytest.hookimpl(trylast=True)
def pytest_collection_modifyitems(self): def pytest_collection_modifyitems(self):
self.report_collect(True) self.report_collect(True)
@pytest.hookimpl(trylast=True) @pytest.hookimpl(trylast=True)
def pytest_sessionstart(self, session): def pytest_sessionstart(self, session):
self._session = session
self._sessionstarttime = time.time() self._sessionstarttime = time.time()
if not self.showheader: if not self.showheader:
return return
@ -316,8 +439,11 @@ class TerminalReporter:
self.write_line(msg) self.write_line(msg)
lines = self.config.hook.pytest_report_header( lines = self.config.hook.pytest_report_header(
config=self.config, startdir=self.startdir) config=self.config, startdir=self.startdir)
self._write_report_lines_from_hooks(lines)
def _write_report_lines_from_hooks(self, lines):
lines.reverse() lines.reverse()
for line in flatten(lines): for line in collapse(lines):
self.write_line(line) self.write_line(line)
def pytest_report_header(self, config): def pytest_report_header(self, config):
@ -342,10 +468,9 @@ class TerminalReporter:
rep.toterminal(self._tw) rep.toterminal(self._tw)
return 1 return 1
return 0 return 0
if not self.showheader: lines = self.config.hook.pytest_report_collectionfinish(
return config=self.config, startdir=self.startdir, items=session.items)
#for i, testarg in enumerate(self.config.args): self._write_report_lines_from_hooks(lines)
# self.write_line("test path %d: %s" %(i+1, testarg))
def _printcollecteditems(self, items): def _printcollecteditems(self, items):
# to print out items and their parent collectors # to print out items and their parent collectors
@ -375,7 +500,7 @@ class TerminalReporter:
stack.pop() stack.pop()
for col in needed_collectors[len(stack):]: for col in needed_collectors[len(stack):]:
stack.append(col) stack.append(col)
#if col.name == "()": # if col.name == "()":
# continue # continue
indent = (len(stack) - 1) * " " indent = (len(stack) - 1) * " "
self._tw.line("%s%s" % (indent, col)) self._tw.line("%s%s" % (indent, col))
@ -391,16 +516,19 @@ class TerminalReporter:
if exitstatus in summary_exit_codes: if exitstatus in summary_exit_codes:
self.config.hook.pytest_terminal_summary(terminalreporter=self, self.config.hook.pytest_terminal_summary(terminalreporter=self,
exitstatus=exitstatus) exitstatus=exitstatus)
self.summary_errors()
self.summary_failures()
self.summary_warnings()
self.summary_passes()
if exitstatus == EXIT_INTERRUPTED: if exitstatus == EXIT_INTERRUPTED:
self._report_keyboardinterrupt() self._report_keyboardinterrupt()
del self._keyboardinterrupt_memo del self._keyboardinterrupt_memo
self.summary_deselected()
self.summary_stats() self.summary_stats()
@pytest.hookimpl(hookwrapper=True)
def pytest_terminal_summary(self):
self.summary_errors()
self.summary_failures()
yield
self.summary_warnings()
self.summary_passes()
def pytest_keyboard_interrupt(self, excinfo): def pytest_keyboard_interrupt(self, excinfo):
self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True) self._keyboardinterrupt_memo = excinfo.getrepr(funcargs=True)
@ -424,15 +552,15 @@ class TerminalReporter:
line = self.config.cwd_relative_nodeid(nodeid) line = self.config.cwd_relative_nodeid(nodeid)
if domain and line.endswith(domain): if domain and line.endswith(domain):
line = line[:-len(domain)] line = line[:-len(domain)]
l = domain.split("[") values = domain.split("[")
l[0] = l[0].replace('.', '::') # don't replace '.' in params values[0] = values[0].replace('.', '::') # don't replace '.' in params
line += "[".join(l) line += "[".join(values)
return line return line
# collect_fspath comes from testid which has a "/"-normalized path # collect_fspath comes from testid which has a "/"-normalized path
if fspath: if fspath:
res = mkrel(nodeid).replace("::()", "") # parens-normalization res = mkrel(nodeid).replace("::()", "") # parens-normalization
if nodeid.split("::")[0] != fspath.replace("\\", "/"): if nodeid.split("::")[0] != fspath.replace("\\", nodes.SEP):
res += " <- " + self.startdir.bestrelpath(fspath) res += " <- " + self.startdir.bestrelpath(fspath)
else: else:
res = "[location]" res = "[location]"
@ -458,11 +586,11 @@ class TerminalReporter:
# summaries for sessionfinish # summaries for sessionfinish
# #
def getreports(self, name): def getreports(self, name):
l = [] values = []
for x in self.stats.get(name, []): for x in self.stats.get(name, []):
if not hasattr(x, '_pdbshown'): if not hasattr(x, '_pdbshown'):
l.append(x) values.append(x)
return l return values
def summary_warnings(self): def summary_warnings(self):
if self.hasopt("w"): if self.hasopt("w"):
@ -473,9 +601,9 @@ class TerminalReporter:
grouped = itertools.groupby(all_warnings, key=lambda wr: wr.get_location(self.config)) grouped = itertools.groupby(all_warnings, key=lambda wr: wr.get_location(self.config))
self.write_sep("=", "warnings summary", yellow=True, bold=False) self.write_sep("=", "warnings summary", yellow=True, bold=False)
for location, warnings in grouped: for location, warning_records in grouped:
self._tw.line(str(location) or '<undetermined location>') self._tw.line(str(location) or '<undetermined location>')
for w in warnings: for w in warning_records:
lines = w.message.splitlines() lines = w.message.splitlines()
indented = '\n'.join(' ' + x for x in lines) indented = '\n'.join(' ' + x for x in lines)
self._tw.line(indented) self._tw.line(indented)
@ -502,7 +630,6 @@ class TerminalReporter:
content = content[:-1] content = content[:-1]
self._tw.line(content) self._tw.line(content)
def summary_failures(self): def summary_failures(self):
if self.config.option.tbstyle != "no": if self.config.option.tbstyle != "no":
reports = self.getreports('failed') reports = self.getreports('failed')
@ -542,7 +669,12 @@ class TerminalReporter:
def _outrep_summary(self, rep): def _outrep_summary(self, rep):
rep.toterminal(self._tw) rep.toterminal(self._tw)
showcapture = self.config.option.showcapture
if showcapture == 'no':
return
for secname, content in rep.sections: for secname, content in rep.sections:
if showcapture != 'all' and showcapture not in secname:
continue
self._tw.sep("-", secname) self._tw.sep("-", secname)
if content[-1:] == "\n": if content[-1:] == "\n":
content = content[:-1] content = content[:-1]
@ -559,10 +691,6 @@ class TerminalReporter:
if self.verbosity == -1: if self.verbosity == -1:
self.write_line(msg, **markup) self.write_line(msg, **markup)
def summary_deselected(self):
if 'deselected' in self.stats:
self.write_sep("=", "%d tests deselected" % (
len(self.stats['deselected'])), bold=True)
def repr_pythonversion(v=None): def repr_pythonversion(v=None):
if v is None: if v is None:
@ -572,13 +700,6 @@ def repr_pythonversion(v=None):
except (TypeError, ValueError): except (TypeError, ValueError):
return str(v) return str(v)
def flatten(l):
for x in l:
if isinstance(x, (list, tuple)):
for y in flatten(x):
yield y
else:
yield x
def build_summary_stats_line(stats): def build_summary_stats_line(stats):
keys = ("failed passed skipped deselected " keys = ("failed passed skipped deselected "
@ -613,7 +734,7 @@ def build_summary_stats_line(stats):
def _plugin_nameversions(plugininfo): def _plugin_nameversions(plugininfo):
l = [] values = []
for plugin, dist in plugininfo: for plugin, dist in plugininfo:
# gets us name and version! # gets us name and version!
name = '{dist.project_name}-{dist.version}'.format(dist=dist) name = '{dist.project_name}-{dist.version}'.format(dist=dist)
@ -622,6 +743,6 @@ def _plugin_nameversions(plugininfo):
name = name[7:] name = name[7:]
# we decided to print python package names # we decided to print python package names
# they can have more than one plugin # they can have more than one plugin
if name not in l: if name not in values:
l.append(name) values.append(name)
return l return values

View File

@ -8,7 +8,7 @@ import py
from _pytest.monkeypatch import MonkeyPatch from _pytest.monkeypatch import MonkeyPatch
class TempdirFactory: class TempdirFactory(object):
"""Factory for temporary directories under the common base temp directory. """Factory for temporary directories under the common base temp directory.
The base directory can be configured using the ``--basetemp`` option. The base directory can be configured using the ``--basetemp`` option.
@ -25,7 +25,7 @@ class TempdirFactory:
provides an empty unique-per-test-invocation directory provides an empty unique-per-test-invocation directory
and is guaranteed to be empty. and is guaranteed to be empty.
""" """
#py.log._apiwarn(">1.1", "use tmpdir function argument") # py.log._apiwarn(">1.1", "use tmpdir function argument")
return self.getbasetemp().ensure(string, dir=dir) return self.getbasetemp().ensure(string, dir=dir)
def mktemp(self, basename, numbered=True): def mktemp(self, basename, numbered=True):
@ -116,6 +116,8 @@ def tmpdir(request, tmpdir_factory):
created as a sub directory of the base temporary created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_ directory. The returned object is a `py.path.local`_
path object. path object.
.. _`py.path.local`: https://py.readthedocs.io/en/latest/path.html
""" """
name = request.node.name name = request.node.name
name = re.sub(r"[\W]", "_", name) name = re.sub(r"[\W]", "_", name)
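A usage sketch for the ``tmpdir`` fixture documented above, exercising the ``py.path.local`` API it returns:
```
def test_create_file(tmpdir):
    p = tmpdir.join("hello.txt")
    p.write("content")
    assert p.read() == "content"
    assert len(tmpdir.listdir()) == 1
```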

View File

@ -7,9 +7,8 @@ import traceback
# for transferring markers # for transferring markers
import _pytest._code import _pytest._code
from _pytest.config import hookimpl from _pytest.config import hookimpl
from _pytest.runner import fail, skip from _pytest.outcomes import fail, skip, xfail
from _pytest.python import transfer_markers, Class, Module, Function from _pytest.python import transfer_markers, Class, Module, Function
from _pytest.skipping import MarkEvaluator, xfail
def pytest_pycollect_makeitem(collector, name, obj): def pytest_pycollect_makeitem(collector, name, obj):
@ -109,13 +108,13 @@ class TestCaseFunction(Function):
except TypeError: except TypeError:
try: try:
try: try:
l = traceback.format_exception(*rawexcinfo) values = traceback.format_exception(*rawexcinfo)
l.insert(0, "NOTE: Incompatible Exception Representation, " values.insert(0, "NOTE: Incompatible Exception Representation, "
"displaying natively:\n\n") "displaying natively:\n\n")
fail("".join(l), pytrace=False) fail("".join(values), pytrace=False)
except (fail.Exception, KeyboardInterrupt): except (fail.Exception, KeyboardInterrupt):
raise raise
except: except: # noqa
fail("ERROR: Unknown Incompatible Exception " fail("ERROR: Unknown Incompatible Exception "
"representation:\n%r" % (rawexcinfo,), pytrace=False) "representation:\n%r" % (rawexcinfo,), pytrace=False)
except KeyboardInterrupt: except KeyboardInterrupt:
@ -134,8 +133,7 @@ class TestCaseFunction(Function):
try: try:
skip(reason) skip(reason)
except skip.Exception: except skip.Exception:
self._evalskip = MarkEvaluator(self, 'SkipTest') self._skipped_by_mark = True
self._evalskip.result = True
self._addexcinfo(sys.exc_info()) self._addexcinfo(sys.exc_info())
def addExpectedFailure(self, testcase, rawexcinfo, reason=""): def addExpectedFailure(self, testcase, rawexcinfo, reason=""):

View File

@ -1,13 +0,0 @@
This directory vendors the `pluggy` module.
For a more detailed discussion of the reasons for vendoring this
package, please see [this issue](https://github.com/pytest-dev/pytest/issues/944).
To update the current version, execute:
```
$ pip install -U pluggy==<version> --no-compile --target=_pytest/vendored_packages
```
And commit the modified files. The `pluggy-<version>.dist-info` directory
created by `pip` should be added as well.

View File

@ -1,11 +0,0 @@
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.

View File

@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2015 holger krekel (rather uses bitbucket/hpk42)
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@ -1,40 +0,0 @@
Metadata-Version: 2.0
Name: pluggy
Version: 0.4.0
Summary: plugin and hook calling mechanisms for python
Home-page: https://github.com/pytest-dev/pluggy
Author: Holger Krekel
Author-email: holger at merlinux.eu
License: MIT license
Platform: unix
Platform: linux
Platform: osx
Platform: win32
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: MacOS :: MacOS X
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.6
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.3
Classifier: Programming Language :: Python :: 3.4
Classifier: Programming Language :: Python :: 3.5
Plugin registration and hook calling for Python
===============================================
This is the plugin manager as used by pytest but stripped
of pytest specific details.
During the 0.x series this plugin does not have much documentation
except extensive docstrings in the pluggy.py module.

View File

@ -1,9 +0,0 @@
pluggy.py,sha256=u0oG9cv-oLOkNvEBlwnnu8pp1AyxpoERgUO00S3rvpQ,31543
pluggy-0.4.0.dist-info/DESCRIPTION.rst,sha256=ltvjkFd40LW_xShthp6RRVM6OB_uACYDFR3kTpKw7o4,307
pluggy-0.4.0.dist-info/LICENSE.txt,sha256=ruwhUOyV1HgE9F35JVL9BCZ9vMSALx369I4xq9rhpkM,1134
pluggy-0.4.0.dist-info/METADATA,sha256=pe2hbsqKFaLHC6wAQPpFPn0KlpcPfLBe_BnS4O70bfk,1364
pluggy-0.4.0.dist-info/RECORD,,
pluggy-0.4.0.dist-info/WHEEL,sha256=9Z5Xm-eel1bTS7e6ogYiKz0zmPEqDwIypurdHN1hR40,116
pluggy-0.4.0.dist-info/metadata.json,sha256=T3go5L2qOa_-H-HpCZi3EoVKb8sZ3R-fOssbkWo2nvM,1119
pluggy-0.4.0.dist-info/top_level.txt,sha256=xKSCRhai-v9MckvMuWqNz16c1tbsmOggoMSwTgcpYHE,7
pluggy-0.4.0.dist-info/INSTALLER,sha256=zuuue4knoyJ-UwPPXg8fezS7VCrXJQrAP7zeNuwvFQg,4

View File

@ -1,6 +0,0 @@
Wheel-Version: 1.0
Generator: bdist_wheel (0.29.0)
Root-Is-Purelib: true
Tag: py2-none-any
Tag: py3-none-any

View File

@ -1 +0,0 @@
{"classifiers": ["Development Status :: 4 - Beta", "Intended Audience :: Developers", "License :: OSI Approved :: MIT License", "Operating System :: POSIX", "Operating System :: Microsoft :: Windows", "Operating System :: MacOS :: MacOS X", "Topic :: Software Development :: Testing", "Topic :: Software Development :: Libraries", "Topic :: Utilities", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.6", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.3", "Programming Language :: Python :: 3.4", "Programming Language :: Python :: 3.5"], "extensions": {"python.details": {"contacts": [{"email": "holger at merlinux.eu", "name": "Holger Krekel", "role": "author"}], "document_names": {"description": "DESCRIPTION.rst", "license": "LICENSE.txt"}, "project_urls": {"Home": "https://github.com/pytest-dev/pluggy"}}}, "generator": "bdist_wheel (0.29.0)", "license": "MIT license", "metadata_version": "2.0", "name": "pluggy", "platform": "unix", "summary": "plugin and hook calling mechanisms for python", "version": "0.4.0"}

View File

@ -1,802 +0,0 @@
"""
PluginManager, basic initialization and tracing.
pluggy is the crystallized core of plugin management as used
by some 150 plugins for pytest.
Pluggy uses semantic versioning. Breaking changes are only foreseen for
Major releases (incremented X in "X.Y.Z"). If you want to use pluggy in
your project you should thus use a dependency restriction like
"pluggy>=0.1.0,<1.0" to avoid surprises.
pluggy is concerned with hook specification, hook implementations and hook
calling. For any given hook specification a hook call invokes up to N implementations.
A hook implementation can influence its position and type of execution:
if attributed "tryfirst" or "trylast" it will be tried to execute
first or last. However, if attributed "hookwrapper" an implementation
can wrap all calls to non-hookwrapper implementations. A hookwrapper
can thus execute some code ahead and after the execution of other hooks.
Hook specification is done by way of a regular python function where
both the function name and the names of all its arguments are significant.
Each hook implementation function is verified against the original specification
function, including the names of all its arguments. To allow for hook specifications
to evolve over the lifetime of a project, hook implementations can
accept fewer arguments. One can thus add new arguments and semantics to
a hook specification by adding another argument typically without breaking
existing hook implementations.
The chosen approach is meant to let a hook designer think carefully about
which objects are needed by an extension writer. By contrast, subclass-based
extension mechanisms often expose a lot more state and behaviour than needed,
thus restricting future developments.
Pluggy currently consists of functionality for:
- a way to register new hook specifications. Without a hook
specification no hook calling can be performed.
- a registry of plugins which contain hook implementation functions. It
is possible to register plugins for which a hook specification is not yet
known and validate all hooks when the system is in a more referentially
consistent state. Setting an "optionalhook" attribution to a hook
implementation will avoid PluginValidationError's if a specification
is missing. This allows optional integration between plugins.
- a "hook" relay object from which you can launch 1:N calls to
registered hook implementation functions
- a mechanism for ordering hook implementation functions
- mechanisms for two different types of 1:N calls: "firstresult" for when
the call should stop when the first implementation returns a non-None result.
And the other (default) way of guaranteeing that all hook implementations
will be called and their non-None result collected.
- mechanisms for "historic" extension points such that all newly
registered functions will receive all hook calls that happened
before their registration.
- a mechanism for discovering plugin objects which are based on
setuptools entry points.
- a simple tracing mechanism, including tracing of plugin calls and
their arguments.
"""
import sys
import inspect
__version__ = '0.4.0'
__all__ = ["PluginManager", "PluginValidationError", "HookCallError",
"HookspecMarker", "HookimplMarker"]
_py3 = sys.version_info > (3, 0)
class HookspecMarker:
""" Decorator helper class for marking functions as hook specifications.
You can instantiate it with a project_name to get a decorator.
Calling PluginManager.add_hookspecs later will discover all marked functions
if the PluginManager uses the same project_name.
"""
def __init__(self, project_name):
self.project_name = project_name
def __call__(self, function=None, firstresult=False, historic=False):
""" if passed a function, directly sets attributes on the function
which will make it discoverable to add_hookspecs(). If passed no
function, returns a decorator which can be applied to a function
later using the attributes supplied.
If firstresult is True the 1:N hook call (N being the number of registered
hook implementation functions) will stop at I<=N when the I'th function
returns a non-None result.
If historic is True calls to a hook will be memorized and replayed
on later registered plugins.
"""
def setattr_hookspec_opts(func):
if historic and firstresult:
raise ValueError("cannot have a historic firstresult hook")
setattr(func, self.project_name + "_spec",
dict(firstresult=firstresult, historic=historic))
return func
if function is not None:
return setattr_hookspec_opts(function)
else:
return setattr_hookspec_opts
class HookimplMarker:
""" Decorator helper class for marking functions as hook implementations.
You can instantiate with a project_name to get a decorator.
Calling PluginManager.register later will discover all marked functions
if the PluginManager uses the same project_name.
"""
def __init__(self, project_name):
self.project_name = project_name
def __call__(self, function=None, hookwrapper=False, optionalhook=False,
tryfirst=False, trylast=False):
""" if passed a function, directly sets attributes on the function
which will make it discoverable to register(). If passed no function,
returns a decorator which can be applied to a function later using
the attributes supplied.
If optionalhook is True a missing matching hook specification will not result
in an error (by default it is an error if no matching spec is found).
If tryfirst is True this hook implementation will run as early as possible
in the chain of N hook implementations for a specification.
If trylast is True this hook implementation will run as late as possible
in the chain of N hook implementations.
If hookwrapper is True the hook implementation needs to execute exactly
one "yield". The code before the yield is run early before any non-hookwrapper
function is run. The code after the yield is run after all non-hookwrapper
functions have run. The yield receives an ``_CallOutcome`` object representing
the exception or result outcome of the inner calls (including other hookwrapper
calls).
"""
def setattr_hookimpl_opts(func):
setattr(func, self.project_name + "_impl",
dict(hookwrapper=hookwrapper, optionalhook=optionalhook,
tryfirst=tryfirst, trylast=trylast))
return func
if function is None:
return setattr_hookimpl_opts
else:
return setattr_hookimpl_opts(function)
def normalize_hookimpl_opts(opts):
opts.setdefault("tryfirst", False)
opts.setdefault("trylast", False)
opts.setdefault("hookwrapper", False)
opts.setdefault("optionalhook", False)
class _TagTracer:
def __init__(self):
self._tag2proc = {}
self.writer = None
self.indent = 0
def get(self, name):
return _TagTracerSub(self, (name,))
def format_message(self, tags, args):
if isinstance(args[-1], dict):
extra = args[-1]
args = args[:-1]
else:
extra = {}
content = " ".join(map(str, args))
indent = " " * self.indent
lines = [
"%s%s [%s]\n" % (indent, content, ":".join(tags))
]
for name, value in extra.items():
lines.append("%s %s: %s\n" % (indent, name, value))
return lines
def processmessage(self, tags, args):
if self.writer is not None and args:
lines = self.format_message(tags, args)
self.writer(''.join(lines))
try:
self._tag2proc[tags](tags, args)
except KeyError:
pass
def setwriter(self, writer):
self.writer = writer
def setprocessor(self, tags, processor):
if isinstance(tags, str):
tags = tuple(tags.split(":"))
else:
assert isinstance(tags, tuple)
self._tag2proc[tags] = processor
class _TagTracerSub:
def __init__(self, root, tags):
self.root = root
self.tags = tags
def __call__(self, *args):
self.root.processmessage(self.tags, args)
def setmyprocessor(self, processor):
self.root.setprocessor(self.tags, processor)
def get(self, name):
return self.__class__(self.root, self.tags + (name,))
def _raise_wrapfail(wrap_controller, msg):
co = wrap_controller.gi_code
raise RuntimeError("wrap_controller at %r %s:%d %s" %
(co.co_name, co.co_filename, co.co_firstlineno, msg))
def _wrapped_call(wrap_controller, func):
""" Wrap calling to a function with a generator which needs to yield
exactly once. The yield point will trigger calling the wrapped function
and return its _CallOutcome to the yield point. The generator then needs
to finish (raise StopIteration) in order for the wrapped call to complete.
"""
try:
next(wrap_controller) # first yield
except StopIteration:
_raise_wrapfail(wrap_controller, "did not yield")
call_outcome = _CallOutcome(func)
try:
wrap_controller.send(call_outcome)
_raise_wrapfail(wrap_controller, "has second yield")
except StopIteration:
pass
return call_outcome.get_result()
class _CallOutcome:
""" Outcome of a function call, either an exception or a proper result.
Calling the ``get_result`` method will return the result or reraise
the exception raised when the function was called. """
excinfo = None
def __init__(self, func):
try:
self.result = func()
except BaseException:
self.excinfo = sys.exc_info()
def force_result(self, result):
self.result = result
self.excinfo = None
def get_result(self):
if self.excinfo is None:
return self.result
else:
ex = self.excinfo
if _py3:
raise ex[1].with_traceback(ex[2])
_reraise(*ex) # noqa
if not _py3:
exec("""
def _reraise(cls, val, tb):
raise cls, val, tb
""")
class _TracedHookExecution:
def __init__(self, pluginmanager, before, after):
self.pluginmanager = pluginmanager
self.before = before
self.after = after
self.oldcall = pluginmanager._inner_hookexec
assert not isinstance(self.oldcall, _TracedHookExecution)
self.pluginmanager._inner_hookexec = self
def __call__(self, hook, hook_impls, kwargs):
self.before(hook.name, hook_impls, kwargs)
outcome = _CallOutcome(lambda: self.oldcall(hook, hook_impls, kwargs))
self.after(outcome, hook.name, hook_impls, kwargs)
return outcome.get_result()
def undo(self):
self.pluginmanager._inner_hookexec = self.oldcall
class PluginManager(object):
""" Core Pluginmanager class which manages registration
of plugin objects and 1:N hook calling.
You can register new hooks by calling ``add_hookspec(module_or_class)``.
You can register plugin objects (which contain hooks) by calling
``register(plugin)``. The Pluginmanager is initialized with a
prefix that is searched for in the names of the dict of registered
plugin objects. An optional excludefunc allows blacklisting names which
are not considered as hooks despite a matching prefix.
For debugging purposes you can call ``enable_tracing()``
which will subsequently send debug information to the trace helper.
"""
def __init__(self, project_name, implprefix=None):
""" if implprefix is given implementation functions
will be recognized if their name matches the implprefix. """
self.project_name = project_name
self._name2plugin = {}
self._plugin2hookcallers = {}
self._plugin_distinfo = []
self.trace = _TagTracer().get("pluginmanage")
self.hook = _HookRelay(self.trace.root.get("hook"))
self._implprefix = implprefix
self._inner_hookexec = lambda hook, methods, kwargs: \
_MultiCall(methods, kwargs, hook.spec_opts).execute()
def _hookexec(self, hook, methods, kwargs):
# called from all hookcaller instances.
# enable_tracing will set its own wrapping function at self._inner_hookexec
return self._inner_hookexec(hook, methods, kwargs)
def register(self, plugin, name=None):
""" Register a plugin and return its canonical name or None if the name
is blocked from registering. Raise a ValueError if the plugin is already
registered. """
plugin_name = name or self.get_canonical_name(plugin)
if plugin_name in self._name2plugin or plugin in self._plugin2hookcallers:
if self._name2plugin.get(plugin_name, -1) is None:
return # blocked plugin, return None to indicate no registration
raise ValueError("Plugin already registered: %s=%s\n%s" %
(plugin_name, plugin, self._name2plugin))
# XXX if an error happens we should make sure no state has been
# changed at point of return
self._name2plugin[plugin_name] = plugin
# register matching hook implementations of the plugin
self._plugin2hookcallers[plugin] = hookcallers = []
for name in dir(plugin):
hookimpl_opts = self.parse_hookimpl_opts(plugin, name)
if hookimpl_opts is not None:
normalize_hookimpl_opts(hookimpl_opts)
method = getattr(plugin, name)
hookimpl = HookImpl(plugin, plugin_name, method, hookimpl_opts)
hook = getattr(self.hook, name, None)
if hook is None:
hook = _HookCaller(name, self._hookexec)
setattr(self.hook, name, hook)
elif hook.has_spec():
self._verify_hook(hook, hookimpl)
hook._maybe_apply_history(hookimpl)
hook._add_hookimpl(hookimpl)
hookcallers.append(hook)
return plugin_name
def parse_hookimpl_opts(self, plugin, name):
method = getattr(plugin, name)
try:
res = getattr(method, self.project_name + "_impl", None)
except Exception:
res = {}
if res is not None and not isinstance(res, dict):
# false positive
res = None
elif res is None and self._implprefix and name.startswith(self._implprefix):
res = {}
return res
def unregister(self, plugin=None, name=None):
""" unregister a plugin object and all its contained hook implementations
from internal data structures. """
if name is None:
assert plugin is not None, "one of name or plugin needs to be specified"
name = self.get_name(plugin)
if plugin is None:
plugin = self.get_plugin(name)
# if self._name2plugin[name] == None registration was blocked: ignore
if self._name2plugin.get(name):
del self._name2plugin[name]
for hookcaller in self._plugin2hookcallers.pop(plugin, []):
hookcaller._remove_plugin(plugin)
return plugin
def set_blocked(self, name):
""" block registrations of the given name, unregister if already registered. """
self.unregister(name=name)
self._name2plugin[name] = None
def is_blocked(self, name):
""" return True if the name blogs registering plugins of that name. """
return name in self._name2plugin and self._name2plugin[name] is None
def add_hookspecs(self, module_or_class):
""" add new hook specifications defined in the given module_or_class.
Functions are recognized if they have been decorated accordingly. """
names = []
for name in dir(module_or_class):
spec_opts = self.parse_hookspec_opts(module_or_class, name)
if spec_opts is not None:
hc = getattr(self.hook, name, None)
if hc is None:
hc = _HookCaller(name, self._hookexec, module_or_class, spec_opts)
setattr(self.hook, name, hc)
else:
# plugins registered this hook without knowing the spec
hc.set_specification(module_or_class, spec_opts)
for hookfunction in (hc._wrappers + hc._nonwrappers):
self._verify_hook(hc, hookfunction)
names.append(name)
if not names:
raise ValueError("did not find any %r hooks in %r" %
(self.project_name, module_or_class))
def parse_hookspec_opts(self, module_or_class, name):
method = getattr(module_or_class, name)
return getattr(method, self.project_name + "_spec", None)
def get_plugins(self):
""" return the set of registered plugins. """
return set(self._plugin2hookcallers)
def is_registered(self, plugin):
""" Return True if the plugin is already registered. """
return plugin in self._plugin2hookcallers
def get_canonical_name(self, plugin):
""" Return canonical name for a plugin object. Note that a plugin
may be registered under a different name which was specified
by the caller of register(plugin, name). To obtain the name
of a registered plugin use ``get_name(plugin)`` instead."""
return getattr(plugin, "__name__", None) or str(id(plugin))
def get_plugin(self, name):
""" Return a plugin or None for the given name. """
return self._name2plugin.get(name)
def has_plugin(self, name):
""" Return True if a plugin with the given name is registered. """
return self.get_plugin(name) is not None
def get_name(self, plugin):
""" Return name for registered plugin or None if not registered. """
for name, val in self._name2plugin.items():
if plugin == val:
return name
def _verify_hook(self, hook, hookimpl):
if hook.is_historic() and hookimpl.hookwrapper:
raise PluginValidationError(
"Plugin %r\nhook %r\nhistoric incompatible to hookwrapper" %
(hookimpl.plugin_name, hook.name))
for arg in hookimpl.argnames:
if arg not in hook.argnames:
raise PluginValidationError(
"Plugin %r\nhook %r\nargument %r not available\n"
"plugin definition: %s\n"
"available hookargs: %s" %
(hookimpl.plugin_name, hook.name, arg,
_formatdef(hookimpl.function), ", ".join(hook.argnames)))
def check_pending(self):
""" Verify that all hooks which have not been verified against
a hook specification are optional, otherwise raise PluginValidationError"""
for name in self.hook.__dict__:
if name[0] != "_":
hook = getattr(self.hook, name)
if not hook.has_spec():
for hookimpl in (hook._wrappers + hook._nonwrappers):
if not hookimpl.optionalhook:
raise PluginValidationError(
"unknown hook %r in plugin %r" %
(name, hookimpl.plugin))
def load_setuptools_entrypoints(self, entrypoint_name):
""" Load modules from querying the specified setuptools entrypoint name.
Return the number of loaded plugins. """
from pkg_resources import (iter_entry_points, DistributionNotFound,
VersionConflict)
for ep in iter_entry_points(entrypoint_name):
# is the plugin registered or blocked?
if self.get_plugin(ep.name) or self.is_blocked(ep.name):
continue
try:
plugin = ep.load()
except DistributionNotFound:
continue
except VersionConflict as e:
raise PluginValidationError(
"Plugin %r could not be loaded: %s!" % (ep.name, e))
self.register(plugin, name=ep.name)
self._plugin_distinfo.append((plugin, ep.dist))
return len(self._plugin_distinfo)
def list_plugin_distinfo(self):
""" return list of distinfo/plugin tuples for all setuptools registered
plugins. """
return list(self._plugin_distinfo)
def list_name_plugin(self):
""" return list of name/plugin pairs. """
return list(self._name2plugin.items())
def get_hookcallers(self, plugin):
""" get all hook callers for the specified plugin. """
return self._plugin2hookcallers.get(plugin)
def add_hookcall_monitoring(self, before, after):
""" add before/after tracing functions for all hooks
and return an undo function which, when called,
will remove the added tracers.
``before(hook_name, hook_impls, kwargs)`` will be called ahead
of all hook calls and receive a hookcaller instance, a list
of HookImpl instances and the keyword arguments for the hook call.
``after(outcome, hook_name, hook_impls, kwargs)`` receives the
same arguments as ``before`` but also a :py:class:`_CallOutcome <_pytest.vendored_packages.pluggy._CallOutcome>` object
which represents the result of the overall hook call.
"""
return _TracedHookExecution(self, before, after).undo
def enable_tracing(self):
""" enable tracing of hook calls and return an undo function. """
hooktrace = self.hook._trace
def before(hook_name, methods, kwargs):
hooktrace.root.indent += 1
hooktrace(hook_name, kwargs)
def after(outcome, hook_name, methods, kwargs):
if outcome.excinfo is None:
hooktrace("finish", hook_name, "-->", outcome.result)
hooktrace.root.indent -= 1
return self.add_hookcall_monitoring(before, after)
def subset_hook_caller(self, name, remove_plugins):
""" Return a new _HookCaller instance for the named method
which manages calls to all registered plugins except the
ones from remove_plugins. """
orig = getattr(self.hook, name)
plugins_to_remove = [plug for plug in remove_plugins if hasattr(plug, name)]
if plugins_to_remove:
hc = _HookCaller(orig.name, orig._hookexec, orig._specmodule_or_class,
orig.spec_opts)
for hookimpl in (orig._wrappers + orig._nonwrappers):
plugin = hookimpl.plugin
if plugin not in plugins_to_remove:
hc._add_hookimpl(hookimpl)
# we also keep track of this hook caller so it
# gets properly removed on plugin unregistration
self._plugin2hookcallers.setdefault(plugin, []).append(hc)
return hc
return orig
class _MultiCall:
""" execute a call into multiple python functions/methods. """
# XXX note that the __multicall__ argument is supported only
# for pytest compatibility reasons. It was never officially
# supported there and is explicitly deprecated since 2.8
# so we can remove it soon, allowing to avoid the below recursion
# in execute() and simplify/speed up the execute loop.
def __init__(self, hook_impls, kwargs, specopts={}):
self.hook_impls = hook_impls
self.kwargs = kwargs
self.kwargs["__multicall__"] = self
self.specopts = specopts
def execute(self):
all_kwargs = self.kwargs
self.results = results = []
firstresult = self.specopts.get("firstresult")
while self.hook_impls:
hook_impl = self.hook_impls.pop()
try:
args = [all_kwargs[argname] for argname in hook_impl.argnames]
except KeyError:
for argname in hook_impl.argnames:
if argname not in all_kwargs:
raise HookCallError(
"hook call must provide argument %r" % (argname,))
if hook_impl.hookwrapper:
return _wrapped_call(hook_impl.function(*args), self.execute)
res = hook_impl.function(*args)
if res is not None:
if firstresult:
return res
results.append(res)
if not firstresult:
return results
def __repr__(self):
status = "%d meths" % (len(self.hook_impls),)
if hasattr(self, "results"):
status = ("%d results, " % len(self.results)) + status
return "<_MultiCall %s, kwargs=%r>" % (status, self.kwargs)
def varnames(func, startindex=None):
""" return argument name tuple for a function, method, class or callable.
In case of a class, its "__init__" method is considered.
For methods the "self" parameter is not included unless you are passing
an unbound method with Python 3 (which has no support for unbound methods)
"""
cache = getattr(func, "__dict__", {})
try:
return cache["_varnames"]
except KeyError:
pass
if inspect.isclass(func):
try:
func = func.__init__
except AttributeError:
return ()
startindex = 1
else:
if not inspect.isfunction(func) and not inspect.ismethod(func):
try:
func = getattr(func, '__call__', func)
except Exception:
return ()
if startindex is None:
startindex = int(inspect.ismethod(func))
try:
rawcode = func.__code__
except AttributeError:
return ()
try:
x = rawcode.co_varnames[startindex:rawcode.co_argcount]
except AttributeError:
x = ()
else:
defaults = func.__defaults__
if defaults:
x = x[:-len(defaults)]
try:
cache["_varnames"] = x
except TypeError:
pass
return x
class _HookRelay:
""" hook holder object for performing 1:N hook calls where N is the number
of registered plugins.
"""
def __init__(self, trace):
self._trace = trace
class _HookCaller(object):
def __init__(self, name, hook_execute, specmodule_or_class=None, spec_opts=None):
self.name = name
self._wrappers = []
self._nonwrappers = []
self._hookexec = hook_execute
if specmodule_or_class is not None:
assert spec_opts is not None
self.set_specification(specmodule_or_class, spec_opts)
def has_spec(self):
return hasattr(self, "_specmodule_or_class")
def set_specification(self, specmodule_or_class, spec_opts):
assert not self.has_spec()
self._specmodule_or_class = specmodule_or_class
specfunc = getattr(specmodule_or_class, self.name)
argnames = varnames(specfunc, startindex=inspect.isclass(specmodule_or_class))
assert "self" not in argnames # sanity check
self.argnames = ["__multicall__"] + list(argnames)
self.spec_opts = spec_opts
if spec_opts.get("historic"):
self._call_history = []
def is_historic(self):
return hasattr(self, "_call_history")
def _remove_plugin(self, plugin):
def remove(wrappers):
for i, method in enumerate(wrappers):
if method.plugin == plugin:
del wrappers[i]
return True
if remove(self._wrappers) is None:
if remove(self._nonwrappers) is None:
raise ValueError("plugin %r not found" % (plugin,))
def _add_hookimpl(self, hookimpl):
if hookimpl.hookwrapper:
methods = self._wrappers
else:
methods = self._nonwrappers
if hookimpl.trylast:
methods.insert(0, hookimpl)
elif hookimpl.tryfirst:
methods.append(hookimpl)
else:
# find last non-tryfirst method
i = len(methods) - 1
while i >= 0 and methods[i].tryfirst:
i -= 1
methods.insert(i + 1, hookimpl)
def __repr__(self):
return "<_HookCaller %r>" % (self.name,)
def __call__(self, **kwargs):
assert not self.is_historic()
return self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
def call_historic(self, proc=None, kwargs=None):
self._call_history.append((kwargs or {}, proc))
# historizing hooks don't return results
self._hookexec(self, self._nonwrappers + self._wrappers, kwargs)
def call_extra(self, methods, kwargs):
""" Call the hook with some additional temporarily participating
methods using the specified kwargs as call parameters. """
old = list(self._nonwrappers), list(self._wrappers)
for method in methods:
opts = dict(hookwrapper=False, trylast=False, tryfirst=False)
hookimpl = HookImpl(None, "<temp>", method, opts)
self._add_hookimpl(hookimpl)
try:
return self(**kwargs)
finally:
self._nonwrappers, self._wrappers = old
def _maybe_apply_history(self, method):
if self.is_historic():
for kwargs, proc in self._call_history:
res = self._hookexec(self, [method], kwargs)
if res and proc is not None:
proc(res[0])
class HookImpl:
def __init__(self, plugin, plugin_name, function, hook_impl_opts):
self.function = function
self.argnames = varnames(self.function)
self.plugin = plugin
self.opts = hook_impl_opts
self.plugin_name = plugin_name
self.__dict__.update(hook_impl_opts)
class PluginValidationError(Exception):
""" plugin failed validation. """
class HookCallError(Exception):
""" Hook was called wrongly. """
if hasattr(inspect, 'signature'):
def _formatdef(func):
return "%s%s" % (
func.__name__,
str(inspect.signature(func))
)
else:
def _formatdef(func):
return "%s%s" % (
func.__name__,
inspect.formatargspec(*inspect.getargspec(func))
)
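As a rough orientation, here is a minimal usage sketch of the API defined above, assuming the module is importable as ``pluggy``; the project name "myproject" and the hook and plugin names are illustrative, not part of the vendored code:

from pluggy import HookspecMarker, HookimplMarker, PluginManager

hookspec = HookspecMarker("myproject")
hookimpl = HookimplMarker("myproject")

class MySpec:
    @hookspec
    def myhook(self, arg):
        """Hook specification: the argument names are significant."""

class Plugin1:
    @hookimpl
    def myhook(self, arg):
        return arg + 1

class Plugin2:
    @hookimpl(hookwrapper=True)
    def myhook(self, arg):
        # code before the yield runs before the non-hookwrapper implementations
        outcome = yield
        # outcome.get_result() returns the inner results or re-raises their exception
        outcome.get_result()

pm = PluginManager("myproject")
pm.add_hookspecs(MySpec)
pm.register(Plugin1())
pm.register(Plugin2())
assert pm.hook.myhook(arg=41) == [42]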

View File

@ -39,8 +39,9 @@ def pytest_addoption(parser):
'-W', '--pythonwarnings', action='append', '-W', '--pythonwarnings', action='append',
help="set which warnings to report, see -W option of python itself.") help="set which warnings to report, see -W option of python itself.")
parser.addini("filterwarnings", type="linelist", parser.addini("filterwarnings", type="linelist",
help="Each line specifies warning filter pattern which would be passed" help="Each line specifies a pattern for "
"to warnings.filterwarnings. Process after -W and --pythonwarnings.") "warnings.filterwarnings. "
"Processed after -W and --pythonwarnings.")
@contextmanager @contextmanager
@ -59,6 +60,11 @@ def catch_warnings_for_item(item):
for arg in inifilters: for arg in inifilters:
_setoption(warnings, arg) _setoption(warnings, arg)
for mark in item.iter_markers():
if mark.name == 'filterwarnings':
for arg in mark.args:
warnings._setoption(arg)
yield yield
for warning in log: for warning in log:
@ -66,8 +72,10 @@ def catch_warnings_for_item(item):
unicode_warning = False unicode_warning = False
if compat._PY2 and any(isinstance(m, compat.UNICODE_TYPES) for m in warn_msg.args): if compat._PY2 and any(isinstance(m, compat.UNICODE_TYPES) for m in warn_msg.args):
new_args = [compat.safe_str(m) for m in warn_msg.args] new_args = []
unicode_warning = warn_msg.args != new_args for m in warn_msg.args:
new_args.append(compat.ascii_escaped(m) if isinstance(m, compat.UNICODE_TYPES) else m)
unicode_warning = list(warn_msg.args) != new_args
warn_msg.args = new_args warn_msg.args = new_args
msg = warnings.formatwarning( msg = warnings.formatwarning(
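For illustration, a sketch of how the ``filterwarnings`` mark handled above might be applied to a single test; the warning message is made up:

import warnings

import pytest

@pytest.mark.filterwarnings("ignore:deprecated call")
def test_one():
    # this warning matches the "ignore" filter from the mark and is not reported
    warnings.warn("deprecated call to helper()", DeprecationWarning)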

View File

@ -10,9 +10,7 @@ environment:
- TOXENV: "coveralls" - TOXENV: "coveralls"
# note: please use "tox --listenvs" to populate the build matrix below # note: please use "tox --listenvs" to populate the build matrix below
- TOXENV: "linting" - TOXENV: "linting"
- TOXENV: "py26"
- TOXENV: "py27" - TOXENV: "py27"
- TOXENV: "py33"
- TOXENV: "py34" - TOXENV: "py34"
- TOXENV: "py35" - TOXENV: "py35"
- TOXENV: "py36" - TOXENV: "py36"
@ -20,12 +18,16 @@ environment:
- TOXENV: "py27-pexpect" - TOXENV: "py27-pexpect"
- TOXENV: "py27-xdist" - TOXENV: "py27-xdist"
- TOXENV: "py27-trial" - TOXENV: "py27-trial"
- TOXENV: "py35-pexpect" - TOXENV: "py27-numpy"
- TOXENV: "py35-xdist" - TOXENV: "py27-pluggymaster"
- TOXENV: "py35-trial" - TOXENV: "py36-pexpect"
- TOXENV: "py36-xdist"
- TOXENV: "py36-trial"
- TOXENV: "py36-numpy"
- TOXENV: "py36-pluggymaster"
- TOXENV: "py27-nobyte" - TOXENV: "py27-nobyte"
- TOXENV: "doctesting" - TOXENV: "doctesting"
- TOXENV: "freeze" - TOXENV: "py35-freeze"
- TOXENV: "docs" - TOXENV: "docs"
install: install:
@ -34,7 +36,7 @@ install:
- if "%TOXENV%" == "pypy" call scripts\install-pypy.bat - if "%TOXENV%" == "pypy" call scripts\install-pypy.bat
- C:\Python35\python -m pip install tox - C:\Python36\python -m pip install --upgrade --pre tox
build: false # Not a C# project, build stuff at the test step instead. build: false # Not a C# project, build stuff at the test step instead.

View File

@ -1 +0,0 @@
All old-style-class-specific behavior in current classes in pytest's API is considered deprecated at this point and will be removed in a future release. This affects only Python 2 users, and only in rare situations.

View File

@ -1 +0,0 @@
Introduce deprecation warnings for legacy marks-based parametersets.

View File

@ -1 +0,0 @@
Fix decode error in Python 2 for doctests in docstrings.

View File

@ -1 +0,0 @@
Exceptions raised during teardown by finalizers are now suppressed until all finalizers are called, with the initial exception reraised.

View File

@ -1 +0,0 @@
Fix incorrect "collected items" report when specifying tests on the command-line.

View File

@ -1,4 +0,0 @@
``deprecated_call`` in context-manager form now captures deprecation warnings even if
the same warning has already been raised. Also, ``deprecated_call`` will always produce
the same error message (previously it would produce different messages in context-manager vs.
function-call mode).
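A short sketch of the behavior described above; the ``legacy_api`` function is hypothetical:

import warnings

import pytest

def legacy_api():
    warnings.warn("legacy_api() is deprecated", DeprecationWarning)
    return 42

def test_legacy_api_warns():
    # captures the DeprecationWarning even if the same warning was raised before
    with pytest.deprecated_call():
        assert legacy_api() == 42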

View File

@ -1 +0,0 @@
Create invoke tasks for updating the vendored packages.

View File

@ -1 +0,0 @@
Fix internal error when trying to detect the start of a recursive traceback.

View File

@ -1 +0,0 @@
Internal code move: move code for pytest.approx/pytest.raises to their own files in order to cut down the size of python.py

View File

@ -1 +0,0 @@
Explicitly state for which hooks the calls stop after the first non-None result.

View File

@ -1 +0,0 @@
Update copyright dates in LICENSE, README.rst and in the documentation.

View File

@ -1 +0,0 @@
Now test function objects have a ``pytestmark`` attribute containing a list of marks applied directly to the test function, as opposed to marks inherited from parent classes or modules.

View File

@ -0,0 +1 @@
A rare race condition which might result in corrupted ``.pyc`` files on Windows has hopefully been solved.

View File

@ -0,0 +1 @@
``pytest`` now depends on the `python-atomicwrites <https://github.com/untitaker/python-atomicwrites>`_ library.

View File

@ -0,0 +1 @@
Support for Python 3.7's builtin ``breakpoint()`` method, see `Using the builtin breakpoint function <https://docs.pytest.org/en/latest/usage.html#breakpoint-builtin>`_ for details.

2
changelog/3290.feature Normal file
View File

@ -0,0 +1,2 @@
``monkeypatch`` now supports a ``context()`` function which acts as a context manager, undoing all patching done
within the ``with`` block.
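A minimal sketch of the new context manager; the patched attribute is only an example:

import functools

def test_partial(monkeypatch):
    with monkeypatch.context() as m:
        m.setattr(functools, "partial", 3)
        assert functools.partial == 3
    # everything patched inside the block is undone here
    assert callable(functools.partial)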

View File

@ -0,0 +1,3 @@
pytest no longer changes the log level of the root logger when the
``log-level`` parameter has a greater numeric value than that of the root
logger's level, which makes it play better with custom logging configuration in user code.

1
changelog/3317.feature Normal file
View File

@ -0,0 +1 @@
Introduce correct per-node mark handling and deprecate the existing, always-incorrect mark handling.
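As a hedged sketch of the per-node API this refers to (``Node.iter_markers``), a hypothetical ``conftest.py`` hook that reacts to a ``slow`` mark:

import pytest

def pytest_collection_modifyitems(items):
    for item in items:
        # iter_markers() walks marks applied to the item and its parents
        for marker in item.iter_markers(name="slow"):
            item.add_marker(pytest.mark.skip(reason="slow tests are disabled here"))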

View File

@ -0,0 +1 @@
Remove internal ``_pytest.terminal.flatten`` function in favor of ``more_itertools.collapse``.

1
changelog/3339.trivial Normal file
View File

@ -0,0 +1 @@
Import some modules from ``collections.abc`` instead of ``collections``, as importing them from the latter triggers ``DeprecationWarning`` in Python 3.7.

View File

@ -0,0 +1 @@
``pytest.raises`` now raises ``TypeError`` when receiving an unknown keyword argument.
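A small sketch of the behavior above; the misspelled keyword is deliberate:

import pytest

def test_raises_rejects_unknown_kwargs():
    # "match" is a supported keyword and checks the exception message
    with pytest.raises(ValueError, match="bad value"):
        raise ValueError("bad value")
    # an unrecognized keyword (a typo of "match") now raises TypeError
    with pytest.raises(TypeError):
        pytest.raises(ValueError, mtach="bad value")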

2
changelog/3360.trivial Normal file
View File

@ -0,0 +1,2 @@
``record_property`` is no longer experimental; its warnings, which had previously been left in by mistake, have now been removed.

View File

@ -0,0 +1 @@
``pytest.raises`` now works with exception classes that look like iterables.

32
changelog/README.rst Normal file
View File

@ -0,0 +1,32 @@
This directory contains "newsfragments" which are short files that contain a small **ReST**-formatted
text that will be added to the next ``CHANGELOG``.
The ``CHANGELOG`` will be read by users, so this description should be aimed at pytest users
instead of describing internal changes which are only relevant to the developers.
Make sure to use full sentences with correct case and punctuation, for example::
Fix issue with non-ascii messages from the ``warnings`` module.
Each file should be named like ``<ISSUE>.<TYPE>.rst``, where
``<ISSUE>`` is an issue number, and ``<TYPE>`` is one of:
* ``feature``: new user facing features, like new command-line options and new behavior.
* ``bugfix``: fixes a reported bug.
* ``doc``: documentation improvement, like rewording an entire section or adding missing docs.
* ``removal``: feature deprecation or removal.
* ``vendor``: changes in packages vendored in pytest.
* ``trivial``: fixing a small typo or internal change that might be noteworthy.
So for example: ``123.feature.rst``, ``456.bugfix.rst``.
If your PR fixes an issue, use that number here. If there is no issue,
then after you submit the PR and get the PR number you can add a
changelog using that instead.
If you are not sure what issue type to use, don't hesitate to ask in your PR.
Note that the ``towncrier`` tool will automatically
reflow your text, so it will work best if you stick to a single paragraph, but multiple sentences and links are OK
and encouraged. You can install ``towncrier`` and then run ``towncrier --draft``
if you want to get a preview of how your change will look in the final release notes.

View File

@ -13,7 +13,8 @@
{% if definitions[category]['showcontent'] %} {% if definitions[category]['showcontent'] %}
{% for text, values in sections[section][category]|dictsort(by='value') %} {% for text, values in sections[section][category]|dictsort(by='value') %}
- {{ text }}{% if category != 'vendor' %} (`{{ values[0] }} <https://github.com/pytest-dev/pytest/issues/{{ values[0][1:] }}>`_){% endif %} {% set issue_joiner = joiner(', ') %}
- {{ text }}{% if category != 'vendor' %} ({% for value in values|sort %}{{ issue_joiner() }}`{{ value }} <https://github.com/pytest-dev/pytest/issues/{{ value[1:] }}>`_{% endfor %}){% endif %}
{% endfor %} {% endfor %}

View File

@ -13,8 +13,6 @@ PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
REGENDOC_ARGS := \ REGENDOC_ARGS := \
--normalize "/={8,} (.*) ={8,}/======= \1 ========/" \
--normalize "/_{8,} (.*) _{8,}/_______ \1 ________/" \
--normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \ --normalize "/in \d+.\d+ seconds/in 0.12 seconds/" \
--normalize "@/tmp/pytest-of-.*/pytest-\d+@PYTEST_TMPDIR@" \ --normalize "@/tmp/pytest-of-.*/pytest-\d+@PYTEST_TMPDIR@" \
--normalize "@pytest-(\d+)\\.[^ ,]+@pytest-\1.x.y@" \ --normalize "@pytest-(\d+)\\.[^ ,]+@pytest-\1.x.y@" \

View File

@ -2,14 +2,16 @@
<ul> <ul>
<li><a href="{{ pathto('index') }}">Home</a></li> <li><a href="{{ pathto('index') }}">Home</a></li>
<li><a href="{{ pathto('contents') }}">Contents</a></li>
<li><a href="{{ pathto('getting-started') }}">Install</a></li> <li><a href="{{ pathto('getting-started') }}">Install</a></li>
<li><a href="{{ pathto('contents') }}">Contents</a></li>
<li><a href="{{ pathto('reference') }}">Reference</a></li>
<li><a href="{{ pathto('example/index') }}">Examples</a></li> <li><a href="{{ pathto('example/index') }}">Examples</a></li>
<li><a href="{{ pathto('customize') }}">Customize</a></li> <li><a href="{{ pathto('customize') }}">Customize</a></li>
<li><a href="{{ pathto('contact') }}">Contact</a></li>
<li><a href="{{ pathto('talks') }}">Talks/Posts</a></li>
<li><a href="{{ pathto('changelog') }}">Changelog</a></li> <li><a href="{{ pathto('changelog') }}">Changelog</a></li>
<li><a href="{{ pathto('contributing') }}">Contributing</a></li>
<li><a href="{{ pathto('backwards-compatibility') }}">Backwards Compatibility</a></li>
<li><a href="{{ pathto('license') }}">License</a></li> <li><a href="{{ pathto('license') }}">License</a></li>
<li><a href="{{ pathto('contact') }}">Contact Channels</a></li>
</ul> </ul>
{%- if display_toc %} {%- if display_toc %}

View File

@ -1,7 +1,5 @@
<h3>Useful Links</h3> <h3>Useful Links</h3>
<ul> <ul>
<li><a href="{{ pathto('index') }}">The pytest Website</a></li>
<li><a href="{{ pathto('contributing') }}">Contribution Guide</a></li>
<li><a href="https://pypi.python.org/pypi/pytest">pytest @ PyPI</a></li> <li><a href="https://pypi.python.org/pypi/pytest">pytest @ PyPI</a></li>
<li><a href="https://github.com/pytest-dev/pytest/">pytest @ GitHub</a></li> <li><a href="https://github.com/pytest-dev/pytest/">pytest @ GitHub</a></li>
<li><a href="http://plugincompat.herokuapp.com/">3rd party plugins</a></li> <li><a href="http://plugincompat.herokuapp.com/">3rd party plugins</a></li>

View File

@ -6,6 +6,20 @@ Release announcements
:maxdepth: 2 :maxdepth: 2
release-3.5.0
release-3.4.2
release-3.4.1
release-3.4.0
release-3.3.2
release-3.3.1
release-3.3.0
release-3.2.5
release-3.2.4
release-3.2.3
release-3.2.2
release-3.2.1
release-3.2.0
release-3.1.3
release-3.1.2 release-3.1.2
release-3.1.1 release-3.1.1
release-3.1.0 release-3.1.0

View File

@ -62,7 +62,7 @@ holger krekel
- fix issue655: work around different ways that cause python2/3 - fix issue655: work around different ways that cause python2/3
to leak sys.exc_info into fixtures/tests causing failures in 3rd party code to leak sys.exc_info into fixtures/tests causing failures in 3rd party code
- fix issue615: assertion re-writing did not correctly escape % signs - fix issue615: assertion rewriting did not correctly escape % signs
when formatting boolean operations, which tripped over mixing when formatting boolean operations, which tripped over mixing
booleans with modulo operators. Thanks to Tom Viner for the report, booleans with modulo operators. Thanks to Tom Viner for the report,
triaging and fix. triaging and fix.

View File

@ -0,0 +1,23 @@
pytest-3.1.3
=======================================
pytest 3.1.3 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Antoine Legrand
* Bruno Oliveira
* Max Moroz
* Raphael Pierzina
* Ronny Pfannschmidt
* Ryan Fitzpatrick
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,48 @@
pytest-3.2.0
=======================================
The pytest team is proud to announce the 3.2.0 release!
pytest is a mature Python testing tool with more than 1600 tests
against itself, passing on many different interpreters and platforms.
This release contains a number of bug fixes and improvements, so users are encouraged
to take a look at the CHANGELOG:
http://doc.pytest.org/en/latest/changelog.html
For complete documentation, please visit:
http://docs.pytest.org
As usual, you can upgrade from pypi via:
pip install -U pytest
Thanks to all who contributed to this release, among them:
* Alex Hartoto
* Andras Tim
* Bruno Oliveira
* Daniel Hahler
* Florian Bruhin
* Floris Bruynooghe
* John Still
* Jordan Moldow
* Kale Kundert
* Lawrence Mitchell
* Llandy Riveron Del Risco
* Maik Figura
* Martin Altmayer
* Mihai Capotă
* Nathaniel Waisbrot
* Nguyễn Hồng Quân
* Pauli Virtanen
* Raphael Pierzina
* Ronny Pfannschmidt
* Segev Finer
* V.Kuznetsov
Happy testing,
The Pytest Development Team

View File

@ -0,0 +1,22 @@
pytest-3.2.1
=======================================
pytest 3.2.1 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Alex Gaynor
* Bruno Oliveira
* Florian Bruhin
* Ronny Pfannschmidt
* Srinivas Reddy Thatiparthy
Happy testing,
The pytest Development Team

View File

@ -0,0 +1,28 @@
pytest-3.2.2
=======================================
pytest 3.2.2 has just been released to PyPI.
This is a bug-fix release, being a drop-in replacement. To upgrade::
pip install --upgrade pytest
The full changelog is available at http://doc.pytest.org/en/latest/changelog.html.
Thanks to all who contributed to this release, among them:
* Andreas Pelme
* Antonio Hidalgo
* Bruno Oliveira
* Felipe Dau
* Fernando Macedo
* Jesús Espino
* Joan Massich
* Joe Talbott
* Kirill Pinchuk
* Ronny Pfannschmidt
* Xuan Luong
Happy testing,
The pytest Development Team

Some files were not shown because too many files have changed in this diff