Add CI job that fails PRs if tests are taking >1s #663
Comments
@callumforrester, thoughts on doing something similar for
I like this idea as long as it's reliable. I'm not sure how consistent CI CPU time is: does it vary a lot depending on cloud load? If so, we could actually profile the tests. But regardless, I'm keen on this.
Added DiamondLightSource/dodal#1003. Anecdotally I think the test timings are pretty reliable. I think the best way to confirm this, though, is just to implement the check and see how many false positives we get.
For both this and issue #633, I often wish we could have a "soft no" or "advisory no" from CI.
Nah, "soft no" will immediately get ignored and become "yes" |
It's less an issue here, but for #633 I don't think the PVs being absent because of a power-down, or the device being unplugged for a service, should block merge. |
All unit tests should be fast-running and so should not take >1s each. If they are taking longer, it is most likely due to poor mocking. To enforce this we should add a CI check on each PR that confirms that no individual test took too long; a rough sketch of what such a check could look like follows.
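One possible shape for that check (a minimal sketch only, not necessarily how DiamondLightSource/dodal#1003 implements it; the 1s threshold constant and the `SLOW_TEST_THRESHOLD_S` / `_slow_tests` names are illustrative assumptions) is a small `conftest.py` plugin that records each test's call-phase duration via pytest's hook API and fails the run if any exceed the budget:

```python
# conftest.py -- minimal sketch of a per-test duration budget (illustrative only).
import pytest

SLOW_TEST_THRESHOLD_S = 1.0  # hypothetical budget taken from this issue
_slow_tests: list[tuple[str, float]] = []


def pytest_runtest_logreport(report: pytest.TestReport) -> None:
    # report.duration is the wall-clock time of one phase (setup/call/teardown);
    # only the test body ("call") is measured against the budget here.
    if report.when == "call" and report.duration > SLOW_TEST_THRESHOLD_S:
        _slow_tests.append((report.nodeid, report.duration))


def pytest_terminal_summary(terminalreporter, exitstatus, config) -> None:
    # List offenders at the end of the run so they are easy to spot in CI logs.
    if _slow_tests:
        terminalreporter.section("slow tests (>1s)")
        for nodeid, duration in sorted(_slow_tests, key=lambda item: -item[1]):
            terminalreporter.line(f"{duration:6.2f}s  {nodeid}")


def pytest_sessionfinish(session: pytest.Session, exitstatus: int) -> None:
    # Turn an otherwise-green run red if any test blew the budget,
    # so the PR status check fails.
    if _slow_tests and session.exitstatus == 0:
        session.exitstatus = pytest.ExitCode.TESTS_FAILED
```

Because this runs inside pytest itself, no log parsing is needed, and the threshold could be relaxed (or a margin added) if CI wall-clock times turn out to be noisy.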
The durations can be found with
pytest -m "not s03" --durations=0
but this isn't very machine readable, so it may be worth looking around to see if we can get them out in a nicer way; one possible approach is sketched below.
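As a sketch only (assuming the JUnit XML report is an acceptable format; `test-results.xml` and `check_durations.py` are made-up names), pytest's `--junitxml` output already records a per-test `time` attribute that a small follow-up script could check:

```python
# check_durations.py -- hypothetical post-processing step for a CI job.
# Assumes the tests were run with something like:
#   pytest -m "not s03" --junitxml=test-results.xml
# Each <testcase> element in the JUnit XML carries a `time` attribute in seconds.
import sys
import xml.etree.ElementTree as ET

THRESHOLD_S = 1.0  # illustrative budget from this issue
REPORT = "test-results.xml"  # made-up file name

slow = []
for case in ET.parse(REPORT).getroot().iter("testcase"):
    duration = float(case.get("time", "0"))
    if duration > THRESHOLD_S:
        slow.append((f"{case.get('classname')}::{case.get('name')}", duration))

if slow:
    print("Tests exceeding the 1s budget:")
    for name, duration in sorted(slow, key=lambda item: -item[1]):
        print(f"  {duration:6.2f}s  {name}")
    sys.exit(1)

print("All tests within the 1s budget")
```

This keeps the enforcement logic in a separate, easily tweaked script, at the cost of an extra step in the CI workflow.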
Acceptance Criteria