Hi there,

When loading the test datasets, your code uses `sklearn.preprocessing.StandardScaler` to scale the original data. So, to produce correct testing results, shouldn't the test data be scaled back to the original values when calculating the evaluation metrics? (The metrics should be evaluated on the original values, not the scaled ones.)

For example, I found that `test_data.inverse_transform` is (correctly) applied in TSLib (https://github.com/thuml/Time-Series-Library/blob/main/exp/exp_long_term_forecasting.py). However, I couldn't find a similar treatment anywhere in your code.

Could you kindly point to where your code scales the test data back to the original values before obtaining the final evaluation results? Much appreciated.

---

The MSE and MAE reported in the paper's tables are relative values (computed on the standardized data), so there is no need to scale back to reproduce the testing results. However, if you wish to obtain the absolute MSE and MAE in the original units, you will need to perform that inverse scaling.
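For reference, here is a minimal sketch of the difference between the two conventions, using synthetic data (the array shapes and values are hypothetical, not taken from the repository). Relative MSE is computed directly on the `StandardScaler`-transformed values; absolute MSE is computed after `scaler.inverse_transform` maps predictions and ground truth back to the original units:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical setup: fit the scaler on training data, as the data loaders do.
rng = np.random.default_rng(0)
train = rng.normal(loc=50.0, scale=10.0, size=(1000, 1))
scaler = StandardScaler().fit(train)

# Stand-ins for ground truth and model predictions in *scaled* space.
true_scaled = scaler.transform(rng.normal(50.0, 10.0, size=(200, 1)))
pred_scaled = true_scaled + rng.normal(0.0, 0.1, size=(200, 1))

# Relative metrics (what the paper reports): computed on scaled values.
mse_rel = np.mean((pred_scaled - true_scaled) ** 2)
mae_rel = np.mean(np.abs(pred_scaled - true_scaled))

# Absolute metrics: invert the scaling first, then compute MSE/MAE.
true_abs = scaler.inverse_transform(true_scaled)
pred_abs = scaler.inverse_transform(pred_scaled)
mse_abs = np.mean((pred_abs - true_abs) ** 2)
mae_abs = np.mean(np.abs(pred_abs - true_abs))

print(f"relative MSE: {mse_rel:.4f}  absolute MSE: {mse_abs:.4f}")
```

Since `inverse_transform` is affine (`x * scale_ + mean_`), the absolute MSE equals the relative MSE multiplied by `scale_ ** 2` per feature, so the two conventions differ only by a constant factor for a given dataset.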