
"test_data.inverse_transform" not found before test/evaluation for long-term forecasting? #43

Open
wekaxx opened this issue Jul 11, 2024 · 1 comment

Comments

@wekaxx commented Jul 11, 2024

Hi there,

When loading the test datasets, your code uses sklearn.preprocessing.StandardScaler to scale the original data. So, to produce correct test results, shouldn't the test data be scaled back to the original values when the evaluation metrics are calculated? (The metrics should be computed on the original values, not the scaled ones.)

For example, I found that test_data.inverse_transform is (correctly) applied in TSLib (https://github.com/thuml/Time-Series-Library/blob/main/exp/exp_long_term_forecasting.py). However, I couldn't find a similar step anywhere in your code.

Could you kindly point me to where your code scales the test data back to the original values before computing the final evaluation results? Much appreciated.

@tianzhou2011 (Contributor) commented

The MSE and MAE reported in the paper's tables are relative values (computed on the standardized data), so there is no need to scale back to reproduce those numbers. However, if you wish to obtain the absolute MSE and MAE, you will need to perform that inverse scaling.
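
For anyone looking for the concrete step: below is a minimal sketch of that inverse scaling, assuming (as in TSLib's data loaders) that test_data wraps the fitted StandardScaler and exposes it through an inverse_transform method. The function name and array shapes here are illustrative, not this repo's API.

```python
import numpy as np

def absolute_metrics(preds, trues, test_data):
    """Compute MSE/MAE on the original value scale.

    preds, trues: arrays of shape (n_windows, pred_len, n_features),
    still in the standardized space used during training.
    test_data: dataset object exposing inverse_transform (a wrapper
    around the fitted sklearn StandardScaler) -- an assumption here.
    """
    shape = preds.shape
    # StandardScaler.inverse_transform expects 2-D input of shape
    # (n_samples, n_features), so flatten the window/time axes first.
    preds = test_data.inverse_transform(preds.reshape(-1, shape[-1])).reshape(shape)
    trues = test_data.inverse_transform(trues.reshape(-1, shape[-1])).reshape(shape)
    mse = np.mean((preds - trues) ** 2)
    mae = np.mean(np.abs(preds - trues))
    return mse, mae
```

Note that the inverse transform rescales each feature by its training-set standard deviation, so the absolute metrics weight high-variance features more heavily than the standardized ones do.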
