add one second test check to CI #1033
base: main
Conversation
Force-pushed from d3e68ee to deed495
Great, thank you. I think once we merge into main it will pass again, as I moved the CLI test in #1000. The only issue is that it now looks like the tests are being run twice? The logs are printing results twice, and if you compare the time on https://github.com/DiamondLightSource/dodal/actions/runs/13132250037/job/36639728998 to https://github.com/DiamondLightSource/dodal/actions/runs/13133823256/job/36644639445?pr=1033, it's ~2x the time.
Looking into it now. Update: noticed where this is happening.
Force-pushed from 519fb11 to 8535a08
Reproducing locally:

1027 passed, 4 skipped, 40 deselected in 31.79s
==================== summary ====================
report: commands succeeded
congratulations :)
Beginning to suspect that the culprit is pytest running twice, due to requesting both coverage and a JSON report for timings. This line only occurs once in this run: https://github.com/DiamondLightSource/dodal/actions/runs/13135068486/job/36648559977?pr=1033. Reading the docs...
Tried to replicate the time doubling on a smaller scale, and the undesired doubling does not happen.
Should I exclude this?
It's possible that it's the addition of the dev dependency. Update: the cov.json does not cover timings, therefore pytest-json-report is actually the right tool.
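For context, the two plugins under discussion can run in a single pytest session, so coverage and per-test timings should not need two separate runs. A hedged sketch of such an invocation (the exact flags used in this repo's tox/CI config are not shown in the thread, so these are illustrative defaults for pytest-cov and pytest-json-report):

```shell
# Illustrative only: one pytest run producing both coverage data and
# a JSON report with per-test timings (.report.json is the plugin's
# default output file name).
pytest --cov=dodal --cov-report=xml \
       --json-report --json-report-file=.report.json
```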
No, as I said above, it will be fixed if you merge in.
I thought that I had already merged in.
Force-pushed from 90c7aa7 to 229f472
Ah, my bad. I think the solution is that we need different thresholds for system tests vs unit tests. Can we set something like a 5 s limit on system tests, or just ignore them if they go over the duration? I don't mind which.
Making it 5 s for system tests.
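The thresholds agreed above (1 s for unit tests, 5 s for system tests) could be checked by parsing the `.report.json` that pytest-json-report writes. A minimal sketch, not the PR's actual script: the `system_tests` path used to tell the two kinds of test apart is an assumption, as is the report file name.

```python
import json

UNIT_LIMIT_S = 1.0    # threshold discussed for unit tests
SYSTEM_LIMIT_S = 5.0  # threshold discussed for system tests


def find_slow_tests(report: dict) -> list[tuple[str, float]]:
    """Return (nodeid, duration) for tests exceeding their limit."""
    slow = []
    for test in report.get("tests", []):
        # Total duration is the sum of the setup, call, and teardown
        # phases recorded by pytest-json-report.
        duration = sum(
            test.get(phase, {}).get("duration", 0.0)
            for phase in ("setup", "call", "teardown")
        )
        # Assumption: system tests live under a "system_tests" path.
        limit = SYSTEM_LIMIT_S if "system_tests" in test["nodeid"] else UNIT_LIMIT_S
        if duration > limit:
            slow.append((test["nodeid"], duration))
    return slow


if __name__ == "__main__":
    with open(".report.json") as f:
        slow = find_slow_tests(json.load(f))
    if slow:
        print("❌ The following tests exceeded the 1s threshold:")
        for nodeid, duration in slow:
            print(f"  {nodeid}: {duration:.2f}s")
    else:
        print("✅ All tests ran within the acceptable time limit.")
```

This mirrors the two messages quoted in the PR description below; a CI step would run it after pytest and fail the job when the slow list is non-empty.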
Codecov Report

All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff            @@
##             main    #1033      +/-  ##
===========================================
- Coverage   97.62%   28.58%   -69.05%
===========================================
  Files         159      143       -16
  Lines        6581     6069      -512
===========================================
- Hits         6425     1735     -4690
- Misses        156     4334     +4178

☔ View full report in Codecov by Sentry.
This seems to work now @DominicOram
Coverage took a nosedive, though.
Fixes #1003
Instructions to reviewer on how to test:
print("✅ All tests ran within the acceptable time limit.")
or
print("❌ The following tests exceeded the 1s threshold:")
depending on what happened.

Checks for reviewer
dodal connect ${BEAMLINE}