Merge pull request #11835 from pytest-dev/release-8.0.0rc2
Prepare release version 8.0.0rc2
(cherry picked from commit 97960bdd14)
			
			
parent 5cd0535395
commit ca5bbd0a9f
@@ -1,5 +0,0 @@
-Improvements to ``-r`` for xfailures and xpasses:
-
-* Report tracebacks for xfailures when ``-rx`` is set.
-* Report captured output for xpasses when ``-rX`` is set.
-* For xpasses, add ``-`` in summary between test name and reason, to match how xfail is displayed.
@@ -1 +0,0 @@
-Fix reporting of teardown errors in higher-scoped fixtures when using `--maxfail` or `--stepwise`.
@@ -1,2 +0,0 @@
-Fixed ``IndexError: string index out of range`` crash in ``if highlighted[-1] == "\n" and source[-1] != "\n"``.
-This bug was introduced in pytest 8.0.0rc1.
@@ -1 +0,0 @@
-The :hook:`pytest_plugin_registered` hook has a new ``plugin_name`` parameter containing the name by which ``plugin`` is registered.
@@ -1,3 +0,0 @@
-Fixed a frustrating bug that afflicted some users with the only error being ``assert mod not in mods``. The issue was caused by the fact that ``str(Path(mod))`` and ``mod.__file__`` don't necessarily produce the same string, and were being erroneously used interchangeably in some places in the code.
-
-This fix also broke the internal API of ``PytestPluginManager.consider_conftest`` by introducing a new parameter -- we mention this in case it is being used by external code, even if marked as *private*.
@@ -6,6 +6,7 @@ Release announcements
    :maxdepth: 2
 
+   release-8.0.0rc2
    release-8.0.0rc1
    release-7.4.4
    release-7.4.3

@@ -0,0 +1,32 @@
+pytest-8.0.0rc2
+=======================================
+
+The pytest team is proud to announce the 8.0.0rc2 prerelease!
+
+This is a prerelease, not intended for production use, but to test the upcoming features and improvements
+in order to catch any major problems before the final version is released to the general public.
+
+We appreciate your help testing this out before the final release, making sure to report any
+regressions to our issue tracker:
+
+https://github.com/pytest-dev/pytest/issues
+
+When doing so, please include the string ``[prerelease]`` in the title.
+
+You can upgrade from PyPI via:
+
+    pip install pytest==8.0.0rc2
+
+Users are encouraged to take a look at the CHANGELOG carefully:
+
+    https://docs.pytest.org/en/release-8.0.0rc2/changelog.html
+
+Thanks to all the contributors to this release:
+
+* Ben Brown
+* Bruno Oliveira
+* Ran Benita
+
+
+Happy testing,
+The pytest Development Team

@@ -28,6 +28,37 @@ with advance notice in the **Deprecations** section of releases.
 
 .. towncrier release notes start
 
+pytest 8.0.0rc2 (2024-01-17)
+============================
+
+
+Improvements
+------------
+
+- `#11233 <https://github.com/pytest-dev/pytest/issues/11233>`_: Improvements to ``-r`` for xfailures and xpasses:
+
+  * Report tracebacks for xfailures when ``-rx`` is set.
+  * Report captured output for xpasses when ``-rX`` is set.
+  * For xpasses, add ``-`` in summary between test name and reason, to match how xfail is displayed.
+
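For illustration, a minimal sketch of a test module that exercises both new reports (the file and test names are invented for this example, not taken from the pytest test suite):

.. code-block:: python

    # test_report_flags.py -- hypothetical example for the -rx / -rX improvements
    import pytest


    @pytest.mark.xfail(reason="known bug")
    def test_known_bug():
        # fails as expected; with -rx the traceback is now reported
        assert 1 + 1 == 3


    @pytest.mark.xfail(reason="always xfail")
    def test_unexpected_pass():
        # passes unexpectedly (xpass); with -rX the captured output is now reported
        print("diagnostic output for the xpass")

Running ``pytest -rxX test_report_flags.py`` then prints ``XFAILURES`` and ``XPASSES`` sections, and the xpass line in the short summary gains a ``-`` between test name and reason, as visible in the regenerated documentation output further down in this diff.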
+- `#11825 <https://github.com/pytest-dev/pytest/issues/11825>`_: The :hook:`pytest_plugin_registered` hook has a new ``plugin_name`` parameter containing the name by which ``plugin`` is registered.
+
+
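A minimal sketch of how a ``conftest.py`` could consume the new parameter (the print statement is purely illustrative):

.. code-block:: python

    # conftest.py -- illustrative hook implementation using the new parameter
    def pytest_plugin_registered(plugin, plugin_name, manager):
        # ``plugin_name`` is the name under which ``plugin`` was registered
        # with the plugin manager.
        print(f"plugin registered as {plugin_name!r}: {plugin!r}")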
+Bug Fixes
+---------
+
+- `#11706 <https://github.com/pytest-dev/pytest/issues/11706>`_: Fix reporting of teardown errors in higher-scoped fixtures when using `--maxfail` or `--stepwise`.
+
+
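A sketch of the kind of scenario this fix covers (the file, fixture, and messages are made up): a module-scoped fixture whose teardown raises, run with ``--maxfail=1``:

.. code-block:: python

    # test_teardown_report.py -- hypothetical reproduction sketch
    import pytest


    @pytest.fixture(scope="module")
    def resource():
        yield "resource"
        # error raised during teardown of a higher-scoped fixture
        raise RuntimeError("teardown failed")


    def test_one(resource):
        assert False  # first failure stops the run under --maxfail=1


    def test_two(resource):
        assert True

With ``pytest --maxfail=1`` (or ``--stepwise``), the run stops after the first failure and the module-scoped teardown executes right away; per this entry, the resulting teardown error should now be reported correctly.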
+- `#11758 <https://github.com/pytest-dev/pytest/issues/11758>`_: Fixed ``IndexError: string index out of range`` crash in ``if highlighted[-1] == "\n" and source[-1] != "\n"``.
+  This bug was introduced in pytest 8.0.0rc1.
+
+
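For context, an illustrative (non-pytest) sketch of why that expression can raise: indexing ``[-1]`` on an empty string fails, so the comparison needs an emptiness guard:

.. code-block:: python

    def ends_with_newline(text: str) -> bool:
        # ""[-1] raises IndexError, so check for emptiness first
        return bool(text) and text[-1] == "\n"


    assert ends_with_newline("hello\n")
    assert not ends_with_newline("")  # a bare text[-1] check would crash here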
+- `#9765 <https://github.com/pytest-dev/pytest/issues/9765>`_, `#11816 <https://github.com/pytest-dev/pytest/issues/11816>`_: Fixed a frustrating bug that afflicted some users with the only error being ``assert mod not in mods``. The issue was caused by the fact that ``str(Path(mod))`` and ``mod.__file__`` don't necessarily produce the same string, and were being erroneously used interchangeably in some places in the code.
+
+  This fix also broke the internal API of ``PytestPluginManager.consider_conftest`` by introducing a new parameter -- we mention this in case it is being used by external code, even if marked as *private*.
+
+
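To illustrate the root cause described above (the path is hypothetical and this is not the actual fix), a raw ``__file__``-style string and its round-trip through ``Path`` are not guaranteed to compare equal even when they name the same file:

.. code-block:: python

    from pathlib import Path

    # One file, spelled two ways: a raw string such as ``mod.__file__`` might hold,
    # and the same string after a round-trip through ``Path``.
    raw = "/home/sweet/project/./pkg/mod.py"   # hypothetical path
    via_path = str(Path(raw))                  # pathlib collapses the "./" component

    print(raw == via_path)   # False, although both refer to the same file

Comparing such spellings directly is the kind of mismatch the entry describes.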
 pytest 8.0.0rc1 (2023-12-30)
 ============================
 

@@ -162,7 +162,7 @@ objects, they are still using the default pytest representation:
     rootdir: /home/sweet/project
     collected 8 items
 
-    <Dir parametrize.rst-189>
+    <Dir parametrize.rst-192>
       <Module test_time.py>
         <Function test_timedistance_v0[a0-b0-expected0]>
         <Function test_timedistance_v0[a1-b1-expected1]>
@@ -239,7 +239,7 @@ If you just collect tests you'll also nicely see 'advanced' and 'basic' as varia
     rootdir: /home/sweet/project
     collected 4 items
 
-    <Dir parametrize.rst-189>
+    <Dir parametrize.rst-192>
       <Module test_scenarios.py>
         <Class TestSampleWithScenarios>
           <Function test_demo1[basic]>
@@ -318,7 +318,7 @@ Let's first see how it looks like at collection time:
     rootdir: /home/sweet/project
     collected 2 items
 
-    <Dir parametrize.rst-189>
+    <Dir parametrize.rst-192>
       <Module test_backends.py>
         <Function test_db_initialized[d1]>
         <Function test_db_initialized[d2]>
@@ -503,10 +503,10 @@ Running it results in some skips if we don't have all the python interpreters in
 .. code-block:: pytest
 
    . $ pytest -rs -q multipython.py
-   ssssssssssss...ssssssssssss                                          [100%]
+   ssssssssssssssssssssssss...                                          [100%]
    ========================= short test summary info ==========================
    SKIPPED [12] multipython.py:68: 'python3.9' not found
-   SKIPPED [12] multipython.py:68: 'python3.11' not found
+   SKIPPED [12] multipython.py:68: 'python3.10' not found
    3 passed, 24 skipped in 0.12s
 
 Parametrization of optional implementations/imports

@@ -152,7 +152,7 @@ The test collection would look like this:
     configfile: pytest.ini
     collected 2 items
 
-    <Dir pythoncollection.rst-190>
+    <Dir pythoncollection.rst-193>
      <Module check_myapp.py>
        <Class CheckMyApp>
          <Function simple_check>
@@ -215,7 +215,7 @@ You can always peek at the collection tree without running tests like this:
     configfile: pytest.ini
     collected 3 items
 
-    <Dir pythoncollection.rst-190>
+    <Dir pythoncollection.rst-193>
      <Dir CWD>
        <Module pythoncollection.py>
          <Function test_function>

@@ -660,6 +660,31 @@ If we run this:
     E       assert 0
 
     test_step.py:11: AssertionError
+    ================================ XFAILURES =================================
+    ______________________ TestUserHandling.test_deletion ______________________
+
+    item = <Function test_deletion>
+
+        def pytest_runtest_setup(item):
+            if "incremental" in item.keywords:
+                # retrieve the class name of the test
+                cls_name = str(item.cls)
+                # check if a previous test has failed for this class
+                if cls_name in _test_failed_incremental:
+                    # retrieve the index of the test (if parametrize is used in combination with incremental)
+                    parametrize_index = (
+                        tuple(item.callspec.indices.values())
+                        if hasattr(item, "callspec")
+                        else ()
+                    )
+                    # retrieve the name of the first test function to fail for this class name and index
+                    test_name = _test_failed_incremental[cls_name].get(parametrize_index, None)
+                    # if name found, test has failed for the combination of class name & test name
+                    if test_name is not None:
+    >                   pytest.xfail(f"previous test failed ({test_name})")
+    E                   _pytest.outcomes.XFailed: previous test failed (test_modification)
+
+    conftest.py:47: XFailed
     ========================= short test summary info ==========================
     XFAIL test_step.py::TestUserHandling::test_deletion - reason: previous test failed (test_modification)
     ================== 1 failed, 2 passed, 1 xfailed in 0.12s ==================

@@ -22,7 +22,7 @@ Install ``pytest``
 .. code-block:: bash
 
     $ pytest --version
-    pytest 8.0.0rc1
+    pytest 8.0.0rc2
 
 .. _`simpletest`:
 

@@ -1418,7 +1418,7 @@ Running the above tests results in the following test IDs being used:
    rootdir: /home/sweet/project
    collected 12 items
 
-   <Dir fixtures.rst-208>
+   <Dir fixtures.rst-211>
     <Module test_anothersmtp.py>
       <Function test_showhelo[smtp.gmail.com]>
       <Function test_showhelo[mail.python.org]>

@@ -404,10 +404,19 @@ Example:
     E       assert 0
 
     test_example.py:14: AssertionError
+    ================================ XFAILURES =================================
+    ________________________________ test_xfail ________________________________
+
+        def test_xfail():
+    >       pytest.xfail("xfailing this test")
+    E       _pytest.outcomes.XFailed: xfailing this test
+
+    test_example.py:26: XFailed
+    ================================= XPASSES ==================================
     ========================= short test summary info ==========================
     SKIPPED [1] test_example.py:22: skipping this test
     XFAIL test_example.py::test_xfail - reason: xfailing this test
-    XPASS test_example.py::test_xpass always xfail
+    XPASS test_example.py::test_xpass - always xfail
     ERROR test_example.py::test_error - assert 0
     FAILED test_example.py::test_fail - assert 0
     == 1 failed, 1 passed, 1 skipped, 1 xfailed, 1 xpassed, 1 error in 0.12s ===