Hello.
In psutil tests I have the following idiom:
import psutil
import pytest
import traceback

try:
    HAS_CPU_FREQ = hasattr(psutil, "cpu_freq") and bool(psutil.cpu_freq())
except Exception:
    traceback.print_exc()
    HAS_CPU_FREQ = False

class TestCase:
    @pytest.mark.skipif(not HAS_CPU_FREQ, reason="not supported")
    def test_cpu_freq(self):
        ...
Assuming this makes sense in the first place, would it be worth adding the ability to schedule a failure for an error occurring at import time, so that it gets reported when the test run has finished? Something like:

pytest.fail_atexit(reason)  # reported at the end of the run, during cleanup
For the record, right now I do:
import atexit
import functools
import traceback

import psutil

try:
    HAS_CPU_FREQ = hasattr(psutil, "cpu_freq") and bool(psutil.cpu_freq())
except Exception:  # noqa: BLE001
    atexit.register(functools.partial(print, traceback.format_exc()))
    HAS_CPU_FREQ = False
This at least prints a traceback on exit [1], so I will generally notice it, but what is really needed here is an actual test failure.
[1] unless I run tests in parallel, in which case stdout is suppressed