eval() is used for evaluating string conditions in skipif/xfail, e.g.
`@pytest.mark.skipif("1 == 0")`.
This is the only code that uses `_pytest._code.compile()`, so removing
its last use enables us to remove it entirely.
In this case it doesn't add much. Plain compile() gives a good enough
error message.
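For illustration, this is roughly what evaluating a string condition with the
plain builtins looks like (a sketch, not the exact pytest code; the filename
argument is what produces the locations shown in the messages below):
```
condition = "1 == 0"
# Compiling with a descriptive filename makes SyntaxErrors point at
# "<skipif condition>" rather than a generated-code location.
code = compile(condition, "<skipif condition>", "eval")
result = bool(eval(code, {}))  # False -> the test is not skipped
```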
For regular exceptions, the message is the same.
For SyntaxError exceptions, e.g. "1 ==", the previous code adds a little
bit of useful context:
```
invalid syntax (<0-codegen /pytest/src/_pytest/skipping.py:108>, line 1)
The above exception was the direct cause of the following exception:
1 ==
^
(code was compiled probably from here: <0-codegen /pytest/src/_pytest/skipping.py:108>) (line 1)
During handling of the above exception, another exception occurred:
Error evaluating 'skipif' condition
1 ==
^
SyntaxError: invalid syntax
```
The new code loses it:
```
unexpected EOF while parsing (<skipif condition>, line 1)
During handling of the above exception, another exception occurred:
Error evaluating 'skipif' condition
1 ==
^
SyntaxError: invalid syntax
```
Since the old message is a minor improvement to an unlikely error
condition in a deprecated feature, I think it is not worth all the code
that it requires.
This has been there for as far back as the git history goes (2007), is not
covered by any test, and says "Buggy python version consider upgrading".
Hopefully everyone has upgraded...
Setting log_level via the CLI or .ini will control the log level of the
report that is dumped upon failure of a test.
If caplog modified the log level during the execution of that test, it
should not impact the level that is displayed upon failure in the
"captured log report" section.
[
ran:
- rebased
- reused handler
- changed store keys also to "caplog_handler_*"
- added changelog
all bugs are mine :)
]
There is no need to do the XPASS check here; pytest_runtest_makereport
already handles that (the current handling there is dead code).
All the hook needs to do is refresh the xfail evaluation if needed, and
check the NOTRUN condition again.
Previously, skipif/xfail marks were evaluated using a `MarkEvaluator`
class. I found this class very difficult to understand.
Instead of `MarkEvaluator`, rewrite the evaluation as plain functions, which
are hopefully easier to follow.
I tried to keep the semantics exactly as before, except for improving a few
error messages.
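As a rough sketch of the function-based shape (the names, signatures, and
set of globals below are hypothetical, not the actual new API):
```
import os
import platform
import sys

def evaluate_condition(item, mark, condition):
    # String conditions are compiled and eval'd; anything else is just bool'd.
    if isinstance(condition, str):
        globals_ = {"os": os, "sys": sys, "platform": platform,
                    "config": item.config}
        result = bool(eval(compile(condition, "<condition>", "eval"), globals_))
    else:
        result = bool(condition)
    reason = mark.kwargs.get("reason", f"condition: {condition}")
    return result, reason

def evaluate_skip_marks(item):
    """Return a skip reason if any skipif mark applies, else None."""
    for mark in item.iter_markers(name="skipif"):
        for condition in mark.args:
            result, reason = evaluate_condition(item, mark, condition)
            if result:
                return reason
    return None
```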
This type was actually in `_pytest.skipping` previously, but was moved to
`_pytest.mark.evaluate` in cf40c0743c.
I think the previous location was more appropriate, because the
`MarkEvaluator` is not a generic mark facility; it is explicitly and
exclusively used by the `skipif` and `xfail` marks to evaluate their
particular set of arguments. So it is better to put it in the plugin
code.
Putting `skipping`-related functionality into the core `_pytest.mark`
module also causes some import cycles which we can avoid.
`@pytest.mark.xfail` is meant to work with arbitrary items, and there is
a test `test_mark_xfail_item` which verifies this.
However, the code for some reason uses `pytest_pyfunc_call` for the
call phase check, which only works for Function items. The test
mentioned above only passed "accidentally" because the
`pytest_runtest_makereport` hook also runs an `evalxfail.istrue()`, which
triggers an evaluation, but conceptually it shouldn't do that.
Change to `pytest_runtest_call` to make the xfail checking properly
generic.
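Roughly the shape of the change, as a simplified sketch (not the actual
implementation; only the NOTRUN handling is shown):
```
import pytest

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_call(item):
    # Runs for every Item type, unlike pytest_pyfunc_call, which is only
    # invoked for Function items.
    marker = item.get_closest_marker("xfail")
    if marker is not None and not marker.kwargs.get("run", True):
        pytest.xfail("[NOTRUN] " + str(marker.kwargs.get("reason", "")))
    yield
```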
While working on improving the documentation of the
`pytest_runtest_setup` hook, I came up with this text:
> Called to perform the setup phase of the test item.
>
> The default implementation runs ``setup()`` on item and all of its
> parents (which haven't been setup yet). This includes obtaining the
> values of fixtures required by the item (which haven't been obtained
> yet).
But upon closer inspection I noticed this line at the start of
`SetupState.prepare` (which is what does the actual work for
`pytest_runtest_setup`):
```
self._teardown_towards(needed_collectors)
```
which implies that the setup phase of one item might trigger teardowns
of *previous* items. This complicates the simple explanation. It also
seems like a completely undesirable thing to do, because it breaks
isolation between tests -- e.g. a failed teardown of one item shouldn't
cause the failure of some other item just because that item happens to run
after it.
So the first thing I tried was to remove that line and see if anything
breaks -- nothing did. At least pytest's own test suite runs fine. So
maybe it's just dead code?
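To illustrate the isolation concern with a hypothetical layout (not taken
from the pytest test suite): if the module-scoped fixture's teardown in
`test_a.py` fails, that failure should be attributed to `test_a.py`, not
surface while setting up `test_b.py`.
```
# test_a.py
import pytest

@pytest.fixture(scope="module")
def resource():
    yield "resource"
    # A failing teardown of this fixture should not bleed into other files.
    raise RuntimeError("teardown failed")

def test_uses_resource(resource):
    assert resource == "resource"

# test_b.py
def test_unrelated():
    assert True
```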