return metric score along with untrained best models & params #822
Conversation
Codecov Report
@@           Coverage Diff           @@
##           master     #822   +/-   ##
=======================================
  Coverage   91.33%   91.33%
=======================================
  Files          69       69
  Lines        6867     6867
=======================================
  Hits         6272     6272
  Misses        595      595
=======================================
Continue to review full report at Codecov.
This is a breaking change, but I think we can do it. The other option would be to introduce a parameter in the function to optionally return the best metric, but that would inflate the signature. So I think we could merge this change. Could you maybe just add a short unit test in test_backtesting.py that verifies the actual value being returned?
Another approach is to leave this as-is and create another test where I also manually backtest the best model, and compare that score with the one returned by the gridsearch function.
Out of curiosity, there are some tests that you mark as skipUnless TORCH_AVAILABLE. I don't recall models such as RandomForest from sklearn requiring PyTorch.
I think in case of doubt it is better to create a new unit test dedicated to testing this new functionality. My only concern is that ideally it shouldn't be too long-running; so maybe take a small series (e.g. air passengers), a model that is very quick to fit (e.g. Theta), and not too large a parameter space.
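A minimal sketch of such a test, assuming the new three-value return (best model, best parameters, best score) from gridsearch. The AirPassengers dataset, the Theta model, and the gridsearch/backtest keywords come from the public darts API, but the exact evaluation settings and the test name are illustrative and may need adjusting to match the existing test suite.

```python
import unittest

from darts.datasets import AirPassengersDataset
from darts.metrics import mape
from darts.models import Theta


class GridsearchBestScoreTestCase(unittest.TestCase):
    def test_gridsearch_returns_best_score(self):
        series = AirPassengersDataset().load()
        parameters = {"theta": [1, 2, 3]}

        # gridsearch is expected to return the untrained best model, the best
        # parameter dict, and the corresponding metric score.
        best_model, best_params, best_score = Theta.gridsearch(
            parameters=parameters,
            series=series,
            start=0.7,
            forecast_horizon=12,
            metric=mape,
        )

        # Re-run the backtest manually with the winning parameters (same split,
        # horizon, and metric) and check it reproduces the reported score.
        manual_score = Theta(**best_params).backtest(
            series,
            start=0.7,
            forecast_horizon=12,
            metric=mape,
        )
        self.assertAlmostEqual(best_score, manual_score, places=6)
```

The comparison only holds if the manual backtest uses the same evaluation settings (start point, horizon, stride, metric) that gridsearch uses internally.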
RandomForest does not depend on PyTorch, but it used to (we used to rely on PyTorch …)
Hello @hrzn. Anything else I should add? 😄
Thanks!
No, it's good now, thanks for adding the test! I will merge once the tests finish, and it will be released with the next version of Darts. Thanks!
Fixes #776.
Summary
The gridsearch method now also returns the best metric score, i.e. the minimum error, along with the untrained best model and parameters.
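A minimal usage sketch of the changed API, assuming gridsearch now returns an (untrained) best model, the best parameter dict, and the best metric score. The dataset, model, and parameter grid below are only illustrative.

```python
from darts.datasets import AirPassengersDataset
from darts.metrics import mape
from darts.models import Theta

series = AirPassengersDataset().load()

best_model, best_params, best_score = Theta.gridsearch(
    parameters={"theta": [0.5, 1, 2, 3]},
    series=series,
    start=0.7,
    forecast_horizon=12,
    metric=mape,
)

# best_model is returned untrained; fit it on the full series before forecasting.
best_model.fit(series)
forecast = best_model.predict(12)

print(best_params, best_score)  # e.g. the winning theta and its MAPE
```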