Report timeouts with a TIMEOUT status instead of FAILED #87
A custom status could be nice. We should probably have a better mechanism to convey the test status than the current approach of grepping for it. What other implications does introducing a new status have? You mention something about durations? What are other ways we could accidentally break test suites by introducing this? Would the suite still fail if there's a timeout?
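For context, pytest's `pytest_report_teststatus` hook is one cleaner mechanism than grepping output: a plugin or `conftest.py` can map a failed report to its own category. The sketch below is hypothetical, not pytest-timeout's actual code; in particular, the `"Timeout"` substring check is an assumption about how the timeout appears in the report's failure text.

```python
# Sketch of a pytest_report_teststatus hook (hypothetical, not part of
# pytest-timeout): report a failure caused by a timeout under its own
# category instead of the generic "failed".
def pytest_report_teststatus(report, config):
    """Return (category, short letter, verbose word) for timed-out tests.

    The "Timeout" substring check is an assumption about how the timeout
    shows up in the report's failure representation (longrepr).
    """
    if getattr(report, "when", None) == "call" and getattr(report, "failed", False):
        if "Timeout" in str(getattr(report, "longrepr", "")):
            # category, short progress letter, verbose word with markup
            return "timeout", "T", ("TIMEOUT", {"red": True})
    return None  # fall back to pytest's default status handling
```

Returning a distinct category would let downstream tooling count timeouts separately, while the underlying report still counts as a failure for the session outcome.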
The up-to-date and original shots I had print
I am not sure whether that could break something, but I don't think so. The category returned in case of a timeout is
Yep, I got confused with the
Looks a lot nicer in a proper terminal of course, this is the output from our CI setup.
This could of course break test suites if exceptions occur when running those functions. I tried to make everything as exception-proof as in
All of those possible issues are either breaking

When printing the timing information is too verbose or too far off from the current
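Printing the timing information inline could be kept terse with a small formatting helper. This is an illustrative sketch only; the function name, signature, and threshold default are hypothetical, not taken from the fork:

```python
def format_status_with_duration(word, duration, threshold=0.5):
    """Append the call duration to the verbose status word, e.g.
    "PASSED (1.23s)", but only when it exceeds `threshold` seconds,
    so fast tests keep the familiar short output.

    All names and the threshold default here are illustrative.
    """
    if duration >= threshold:
        return f"{word} ({duration:.2f}s)"
    return word
```

A threshold like this is one way to address the "too verbose" concern: only tests slow enough to be interesting get a duration printed at all.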
Yes, they do still fail. At least I haven't seen a test suite that timed out and got an overall PASS.
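To make that point concrete, the session outcome can be modelled as a toy check (purely illustrative, not pytest-timeout code; pytest derives the real exit code from the test reports themselves):

```python
def session_exit_status(categories):
    """Toy model of the overall session outcome: return 1 (failure) if
    any test failed or timed out, else 0. The category names are
    illustrative placeholders for the per-test statuses.
    """
    return 1 if any(c in ("failed", "timeout") for c in categories) else 0
```

The key invariant is that a new TIMEOUT category must still map to a non-zero exit status, so introducing it cannot silently turn a red CI run green.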
I threw this at one of my personal projects and forced some timeouts:
The first timeout occurs during the teardown:

```
______________________________ test_unit_system _______________________________

cls = , func = . at 0x7f9a6e854ee0>, when = 'teardown'
reraise = (, )

    @classmethod
    def from_call(
        cls,
        func: "Callable[[], TResult]",
        when: "Literal['collect', 'setup', 'call', 'teardown']",
        reraise: Optional[
            Union[Type[BaseException], Tuple[Type[BaseException], ...]]
        ] = None,
    ) -> "CallInfo[TResult]":
        excinfo = None
        start = timing.time()
        precise_start = timing.perf_counter()
        try:
>           result: Optional[TResult] = func()

.tox/py39/lib/python3.9/site-packages/_pytest/runner.py:311:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

.tox/py39/lib/python3.9/site-packages/_pytest/runner.py:255: in
    lambda: ihook(item=item, **kwds), when=when, reraise=reraise
.tox/py39/lib/python3.9/site-packages/pluggy/hooks.py:286: in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
.tox/py39/lib/python3.9/site-packages/pluggy/manager.py:93: in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
.tox/py39/lib/python3.9/site-packages/pluggy/manager.py:84: in
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
.tox/py39/lib/python3.9/site-packages/_pytest/runner.py:175: in pytest_runtest_teardown
    item.session._setupstate.teardown_exact(item, nextitem)
.tox/py39/lib/python3.9/site-packages/_pytest/runner.py:419: in teardown_exact
    self._teardown_towards(needed_collectors)
.tox/py39/lib/python3.9/site-packages/_pytest/runner.py:434: in _teardown_towards
    raise exc
.tox/py39/lib/python3.9/site-packages/_pytest/runner.py:427: in _teardown_towards
    self._pop_and_teardown()
.tox/py39/lib/python3.9/site-packages/_pytest/runner.py:387: in _pop_and_teardown
    self._teardown_with_finalization(colitem)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <_pytest.runner.SetupState object at 0x7f9a8f025e50>, colitem =

    def _teardown_with_finalization(self, colitem) -> None:
        self._callfinalizers(colitem)
        colitem.teardown()
        for colitem in self._finalizers:
>           assert colitem in self.stack
E           AssertionError
```

All the other failed tests have the exact same traceback.
With that single timeout, the test suite still fails:
Note that the weird issue with the timeout happening during teardown came from a single lucky run. Other exceptions might or might not be thrown when repeatedly running them with
I forked this repo to https://github.com/ep12/pytest-timeout/tree/timing_enhancements and added some commits that contain everything that would be helpful for qtile/qtile. Example output looks like
see https://github.com/ep12/qtile/tree/time_ci_runs_pytest_timeout_upstream for the options and check the GitHub Actions CI logs for more output.
Hello again! I just want to point you to my comment in the corresponding qtile issue thread. It looks like the method
@flub or anyone else: Has anyone had a look at this? I think at least the first two commits (d827511 and b7f37f8) would be nice to have. The second commit adds some functionality such that the third commit could even partially be realised in
Hey, sorry, I don't have a lot of time to spend on pytest-timeout, so if no one else offers reviews, I'm afraid my rather sporadic looks at pytest-timeout are all there is. Anyway, having a quick look at those two commits, they do indeed look like interesting features. Would you like to open, preferably separate, PRs for them?
Nothing to apologise for, really! We all have a lot of stuff to do... I'll make those PRs, hopefully relatively soon, though I can't promise it. I'll probably have to look a bit more at the project structure to write the docs and tests, so that might take a bit. When everything is ready for review, I'll mention you so you get notified.
This looks like what I want: I hope it prints TIMEOUT if a test failed because of a timeout. So how is it going now?
Still in progress at the moment...
I am the author of PR qtile/qtile#2331 and want to add `pytest-timeout` to the qtile test suite. The change I made to the `conftest.py` was to implement a TIMEOUT status plus inline timings, to make it more obvious which tests fail and which ones time out. Two of the maintainers commented that they'd rather have this functionality built into `pytest-timeout`, and I agreed. So my shot at it is here and we'd love to get some feedback on whether something like that could be added to `pytest-timeout`.

Example output, luckily without `TIMEOUT` or `FAILED (...)`: