Organise unit tests to use pytest fixtures #172
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##             main     #172      +/-   ##
==========================================
- Coverage   88.58%   87.10%   -1.48%
==========================================
  Files          26       25       -1
  Lines        1603     1613      +10
==========================================
- Hits         1420     1405      -15
- Misses        183      208      +25
```
I have reorganised a few test files just to see what it looks like. @ccarouge @bschroeter it would be great if I could get your thoughts before I push on further.
I think it looks better than before.
But considering the number of changes, do we have to do all the tests in the same PR? That might make it very hard to review. Could we do one module per PR, or only a couple at a time?
tests/test_comparison.py (Outdated)

```python
)

@pytest.fixture
def bitwise_cmp_dir(self, mock_cwd):
    _bitwise_cmp_dir = mock_cwd / internal.FLUXSITE_BITWISE_CMP_DIR
```
With the mock_cwd fixture, you can use a relative path here (internal.FLUXSITE_BITWISE_CMP_DIR only). It might lighten up the tests. Might be applicable elsewhere.
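To illustrate the suggestion: a minimal, self-contained sketch assuming the `mock_cwd` fixture changes the working directory to a temporary path (as the comment implies). `FLUXSITE_BITWISE_CMP_DIR` here is a hypothetical stand-in for the real constant in `internal`; the point is that the relative path resolves to the same location as the explicit `mock_cwd / ...` join.

```python
import os
import tempfile
from pathlib import Path

# Hypothetical stand-in for internal.FLUXSITE_BITWISE_CMP_DIR:
FLUXSITE_BITWISE_CMP_DIR = Path("fluxsite_bitwise_cmp")

old_cwd = os.getcwd()
with tempfile.TemporaryDirectory() as tmp:
    mock_cwd = Path(tmp)
    os.chdir(mock_cwd)  # what a chdir-ing mock_cwd fixture would do

    # Absolute form currently used in the test:
    resolved_absolute = (mock_cwd / FLUXSITE_BITWISE_CMP_DIR).resolve()
    # Relative form the comment suggests; resolves to the same location:
    resolved_relative = FLUXSITE_BITWISE_CMP_DIR.resolve()

    assert resolved_relative == resolved_absolute
    os.chdir(old_cwd)  # restore before the temp dir is removed
```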
tests/test_comparison.py (Outdated)

```python
        comparison_task.run()
        assert f"nccmp -df {file_a} {file_b}" in mock_subprocess_handler.commands

    def test_default_standard_output(self, comparison_task, files):
```
I wonder if we could instead use test parameterisation to write one output test that runs on all output cases. But maybe this should wait until after the reorganisation?
It should be possible to use parameterisation for testing different verbosity levels. I'll look into it.
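A minimal sketch of what that could look like, assuming the task emits a fixed message when verbose and nothing otherwise (the `report` function and its message are hypothetical stand-ins, not the project's real API): one parametrised test replaces a separate test per verbosity level.

```python
import pytest

def report(verbosity: int) -> str:
    """Hypothetical stand-in for the comparison task's console output."""
    return "Success: files are identical" if verbosity > 0 else ""

# One parametrised test covers every verbosity case:
@pytest.mark.parametrize(
    "verbosity, expected",
    [
        (0, ""),  # default: no standard output
        (1, "Success: files are identical"),  # verbose
    ],
)
def test_report_output(verbosity, expected):
    assert report(verbosity) == expected
```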
I think I'd prefer we do the merge into main as a single PR. But we can treat this branch as 'main' for this issue, create other branches that implement small changes, and have those merged into this branch?
Yes, that would be better!
```python
@pytest.fixture
def config():
```
So I am going to approve this section, but bear in mind that some of the changes I've made to the configuration validation system (which now has the bonus feature of installing a data directory containing tests) have simplified this process significantly.
I'll have a fun merge on my hands after this one...
Force-pushed from 30f4ce3 to 19eff9e
@ccarouge I've rebased this branch off main so that we have the recently merged changes (e.g. linting with ruff). I've added the changes you requested for using pytest's
@SeanBryan51 It seems you asked for my review again on this meta-PR. Is that correct?
Force-pushed from c03e814 to eaa440b
Unit tests have an excessive amount of repeated test setup code, making them harder to maintain and read. Using pytest fixtures to set up each test would remove needless code repetition and also prevent state from leaking between test cases.

Organise the tests so that for each function `func`, we have a test class `TestFunc`, with each method of `TestFunc` containing a single test (success or failure case). Use pytest's `parametrize()` feature for testing different levels of verbosity. Remove usage of the `mock_cwd` fixture unless it is required for a test to pass.

Fixes #163
Force-pushed from 9d88427 to 107cc86