Implement tests on inference scenarios for transforms #1094
Conversation
🚀 Deployed on https://deploy-preview-1094--etna-docs.netlify.app
Codecov Report
@@            Coverage Diff             @@
##    inference-v2.1    #1094    +/-   ##
=================================================
  Coverage           ?   86.85%
=================================================
  Files              ?      164
  Lines              ?     8944
  Branches           ?        0
=================================================
  Hits               ?     7768
  Misses             ?     1176
  Partials           ?        0
(YeoJohnsonTransform(in_column="target", mode="macro", inplace=False), "regular_ts"),
(YeoJohnsonTransform(in_column="target", mode="macro", inplace=True), "regular_ts"),
# missing_values
# TODO: not working without out_column
???
If I don't set out_column, then this test fails with an error.
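A minimal sketch of the workaround discussed here, assuming the fix is simply to pass out_column explicitly in the failing non-inplace case (the name "res" is an illustrative placeholder, not from this PR):

```python
from etna.transforms import YeoJohnsonTransform

# Hypothetical workaround: give the non-inplace transform an explicit
# out_column so it does not have to derive an output feature name.
# "res" is an illustrative placeholder.
cases = [
    (YeoJohnsonTransform(in_column="target", mode="macro", inplace=False, out_column="res"), "regular_ts"),
    (YeoJohnsonTransform(in_column="target", mode="macro", inplace=True), "regular_ts"),
]
```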
transformed_subset_future_df = transformed_subset_future_ts.to_pandas()
assert_frame_equal(transformed_subset_future_df, transformed_future_df.loc[:, pd.IndexSlice[segments, :]])


@pytest.mark.parametrize(
Do we have the same parametrization here as in the previous group? Maybe we can share the parametrization somehow (e.g. https://stackoverflow.com/questions/51739589/how-to-share-parametrized-arguments-across-multiple-test-functions).
I don't really think it is a good idea for now, because the set of transforms that works fine is different for each test. Maybe we could extract only some common core, but that looks somewhat artificial.
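For reference, a minimal sketch of the sharing approach from the linked Stack Overflow answer, with illustrative case and test names (not from this PR): build the parametrize decorator once and apply it to every test that needs it.

```python
import pytest

# Illustrative shared parametrization: the case list and test names below
# are placeholders, not the actual inference tests.
INFERENCE_CASES = [
    ("transform_a", "regular_ts"),
    ("transform_b", "ts_with_exog"),
]
shared_cases = pytest.mark.parametrize("transform_name, dataset_name", INFERENCE_CASES)


@shared_cases
def test_transform_train(transform_name, dataset_name):
    ...


@shared_cases
def test_transform_future(transform_name, dataset_name):
    ...
```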
),
# feature_selection
(FilterFeaturesTransform(exclude=["year"]), "ts_with_exog", {"remove": {"year"}}),
# TODO: this should remove only 2 features
Maybe add a link to the issue here.
There is no issue about it yet.
ts, transform, train_segments=["segment_1", "segment_2"], expected_changes={}
)


@to_be_fixed(raises=Exception)
Can we actually fix all these transforms?
At least we can make them fail in an understandable way.
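A minimal sketch of what such a to_be_fixed helper could look like; this is an assumption about its shape, not etna's actual implementation. The wrapped test is expected to raise until the bug is fixed, so an accidental pass becomes a visible test failure.

```python
import functools

import pytest


# Sketch of a to_be_fixed helper (assumed shape, not etna's actual code):
# the wrapped test must raise the given error; if the bug gets fixed and
# the test starts passing, pytest.raises fails loudly, prompting a cleanup.
def to_be_fixed(raises, match=None):
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(*args, **kwargs):
            with pytest.raises(raises, match=match):
                test_fn(*args, **kwargs)

        return wrapper

    return decorator
```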
ts, transform, train_segments=["segment_1", "segment_2"], expected_changes=expected_changes
)


@to_be_fixed(raises=NotImplementedError, match="Per-segment transforms can't work on new segments")
Why is this test separate from the others?
Because it is in the to_be_fixed section. This can be done in theory, but we currently fail this test. For instance, for decomposition transforms we can't do it even in theory, because we fit a separate model for each segment.
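A minimal sketch of the kind of guard that produces this predictable failure; the helper name is hypothetical:

```python
# Hypothetical guard inside a per-segment transform: compare the segments
# seen at transform time against those seen during fit and raise the exact
# error the test above matches on.
def _check_new_segments(fit_segments: set, transform_segments: set) -> None:
    new_segments = transform_segments - fit_segments
    if new_segments:
        raise NotImplementedError(
            f"Per-segment transforms can't work on new segments: {sorted(new_segments)}"
        )
```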
(MedianOutliersTransform(in_column="target"), "ts_with_outliers", {}),
(PredictionIntervalOutliersTransform(in_column="target", model=ProphetModel), "ts_with_outliers", {}),
# math
# TODO: error should be understandable, not like now
???
It fails with a very vague error, and it is not really clear what happened. I think maybe we should raise some error that can be understood by the user.
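A minimal sketch of that suggestion, with hypothetical names: validate inputs up front and raise a clear, user-facing error instead of letting a vague internal exception bubble up.

```python
# Hypothetical validation helper: fail early with an explicit message
# instead of a bare KeyError from deep inside pandas indexing.
def _validate_in_column(df_columns, in_column: str) -> None:
    if in_column not in df_columns:
        raise ValueError(
            f"Column {in_column!r} is not present in the dataset, "
            f"available columns: {sorted(df_columns)}"
        )
```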
self._test_transform_future_without_target(ts, transform, expected_changes=expected_changes)


# TODO: in theory we also need tests for inverse-transform
???
I will delete this line.
Before submitting (must do checklist)
Proposed Changes
Look at #1077.
Closing issues
Closes #1077.