Splits integration / examples / unit tests #204
Conversation
…l scheduled workflow trigger, add test_parameterisation cost/optimiser test matrix
Codecov Report
Attention: Patch coverage is …

@@            Coverage Diff             @@
##           develop     #204      +/-  ##
===========================================
- Coverage    94.70%   94.29%   -0.41%
===========================================
  Files           44       32      -12
  Lines         1887     1614     -273
===========================================
- Hits          1787     1522     -265
+ Misses         100       92       -8

☔ View full report in Codecov by Sentry.
…s, automate the skip marker procedure
@NicolaCourtier @martinjrobins this PR splits the tests into unit, integration, and examples suites. Unfortunately, we have been relying on examples to boost our coverage, which has resulted in the large drop we are experiencing. I'm keen to move away from using examples, as we shouldn't include …
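As background, a test split like this is usually wired up through pytest markers and command-line flags, with a small hook that skips whichever groups were not requested. The sketch below is illustrative only; the flag and marker names mirror the ones discussed in this PR, but the actual conftest.py in the repository may differ:

```python
# conftest.py — illustrative sketch, not the repository's actual file
import pytest

GROUPS = ("unit", "integration", "examples", "plots")


def pytest_addoption(parser):
    # One opt-in flag per test group, e.g. `pytest --unit --integration`
    for group in GROUPS:
        parser.addoption(
            f"--{group}", action="store_true", default=False, help=f"run {group} tests"
        )


def pytest_configure(config):
    for group in GROUPS:
        config.addinivalue_line("markers", f"{group}: mark test as a {group} test")


def pytest_collection_modifyitems(config, items):
    # Automate the skip-marker procedure: any test whose group was not
    # requested on the command line is skipped rather than deselected.
    for group in GROUPS:
        if config.getoption(f"--{group}"):
            continue
        skip = pytest.mark.skip(reason=f"need --{group} option to run")
        for item in items:
            if group in item.keywords:
                item.add_marker(skip)
```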
Hi @BradyPlanden, quick question: which tests are still included in the coverage? When is …
The coverage tests are defined in the noxfile located here. This runs the unit, integration, and plots tests as per below.

```python
def coverage(session):
    session.install("-e", ".[all,dev]", silent=False)
    if PYBOP_SCHEDULED:
        session.run("pip", "install", f"pybamm=={PYBAMM_VERSION}", silent=False)
    session.run(
        "pytest",
        "--unit",
        "--integration",
        "--plots",
        "--cov",
        "--cov-report=xml",
    )
```

The current state of this PR doesn't have the examples running within the GitHub Actions, but I think adding them to the …
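For context, the two module-level names referenced in that session, PYBOP_SCHEDULED and PYBAMM_VERSION, would typically come from environment variables set by the scheduled workflow. The following is only an assumption about how they might be defined, not the PR's actual code:

```python
import os

# Assumed definitions (illustrative only): a flag marking a scheduled CI run
# and the PyBaMM version to pin for that run.
PYBOP_SCHEDULED = os.environ.get("PYBOP_SCHEDULED", "0") == "1"
PYBAMM_VERSION = os.environ.get("PYBAMM_VERSION", "")
```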
If …
Unfortunately, I don't think it would. It would ensure that we test the examples, but the current coverage values are inflated by including these example tests. While the examples exercise our code base, we don't have any assertions in them, so they only tell us that the examples don't throw errors. The … Given the above, two potential options are: …
My preference would be the latter option, as adding all the tests to this PR would be quite cumbersome. Although, perhaps I'm missing an option! Open to other suggestions.
I think it's important we keep testing whether the examples throw any errors. My preference would be to include test_examples in the coverage for now and merge this PR, and then update the tests and check coverage in a separate PR.
… coverage to use parallel workers where possible
We can separate these two things though: keep testing the examples, and just exclude the examples from the coverage reports? I think that including examples in coverage is a bit misleading, as we're not really testing all the code that the examples run. Happy to take a coverage hit on this PR as long as we're adding more tests in subsequent PRs.
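In practice, that separation could look like an additional nox session that runs the example tests on their own, without any coverage flags, so they are still exercised in CI but never feed the coverage report. A minimal sketch, assuming an `--examples` pytest flag and a session name that are not taken from this PR:

```python
import nox


@nox.session
def examples(session):
    # Same editable install as the coverage session quoted above.
    session.install("-e", ".[all,dev]", silent=False)
    # Run only the example tests, with no --cov flags, so failures are still
    # caught in CI but the results do not influence the coverage figures.
    session.run("pytest", "--examples")
```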
Excellent, @NicolaCourtier and I had a chat offline and we are both on board with moving this forward and making up the coverage in follow-on PRs. Thanks for the discussion everyone! @agriyakhetarpal, this is now ready for a review when you get a chance.
Thanks, @BradyPlanden! I agree with Martin, the examples tests are best placed outside the coverage report, since the rest of the tests should take care of that.
I will also suggest taking advantage of the recent Python 3.12 functionality that benefits the coverage module: see python-pillow/Pillow#7820 for an example, and the coverage module's release notes. This is experimental for now, but it does not hurt to try it out if it makes the coverage tests slightly breezier!
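The functionality being referred to is, per the coverage release notes, the experimental sys.monitoring measurement core, enabled via the COVERAGE_CORE=sysmon environment variable on Python 3.12+. A hedged sketch of how a coverage session could opt in follows; the session name, the version guard, and everything beyond the quoted pytest flags are assumptions, not this PR's code:

```python
import sys

import nox


@nox.session
def coverage_sysmon(session):
    """Variant of the coverage session using coverage.py's experimental core."""
    session.install("-e", ".[all,dev]", silent=False)
    env = {}
    if sys.version_info >= (3, 12):
        # Only set on 3.12+, where the sys.monitoring core is available.
        # (Checked on the interpreter driving nox here, for simplicity.)
        env["COVERAGE_CORE"] = "sysmon"
    session.run(
        "pytest",
        "--unit",
        "--integration",
        "--plots",
        "--cov",
        "--cov-report=xml",
        env=env,
    )
```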
Co-authored-by: Agriya Khetarpal <[email protected]>
Ready for another look @agriyakhetarpal. Let's take a look at the …
Thanks @BradyPlanden, these changes are all good to me now!
* Create test_plots.py
* Parametrize test_models
* Add check on n_states
* Add test for json parameter set
* Add test for json export
* Add test for invalid max values
* Add optimisation tests
* Add observer tests
* Add tests on observer evaluate
* Add tests on invalid parameter inputs
* Add invalid sample size tests
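As an aside on the "Parametrize test_models" item: parametrising the model tests usually means running the same assertions over every model class via pytest.mark.parametrize. A sketch under the assumption that the suite iterates over classes such as pybop.lithium_ion.SPM and pybop.lithium_ion.SPMe (the actual list in test_models may differ):

```python
import pytest
import pybop


@pytest.mark.unit
@pytest.mark.parametrize(
    "model_class",
    [pybop.lithium_ion.SPM, pybop.lithium_ion.SPMe],  # assumed subset
)
def test_model_instantiation(model_class):
    # Constructing each model exercises its default parameterisation path.
    model = model_class()
    assert model is not None
```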
This PR adds the following to close #139: …

It also:

* Changes the default `max_unchanged_iterations` from 25 to 5. This value was selected as a trade-off between robustness and performance; open to other defaults, however.
* Adds `_evaluateS1` to the `RootMeanSquared` cost function (a sketch of this kind of sensitivity evaluation follows below).
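For orientation, a sensitivity-aware evaluation of a root-mean-squared cost returns the cost value together with its gradient with respect to the fitted parameters, computed from the model's output sensitivities. The following numpy sketch illustrates the underlying maths only; it is not PyBOP's actual `_evaluateS1` implementation:

```python
import numpy as np


def rmse_and_sensitivity(y_pred, y_data, dy_dtheta):
    """Cost C = sqrt(mean(r**2)) with r = y_pred - y_data, and its gradient
    dC/dtheta_j = (1 / (n * C)) * sum_i r_i * d y_pred_i / d theta_j.

    y_pred, y_data: arrays of shape (n,); dy_dtheta: array of shape (n, n_params).
    """
    r = y_pred - y_data
    cost = np.sqrt(np.mean(r**2))
    grad = (r @ dy_dtheta) / (len(r) * cost)  # assumes cost > 0
    return cost, grad
```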