Loss of precision/decimal places - preprocessing required? #275
Replies: 1 comment
-
D'oh. I had explicitly set a formatting option to display float values to 2dp, so, mercifully, you can ignore me.
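For anyone landing here with the same symptom, a minimal sketch of the mix-up (assuming the formatting option was pandas' `display.float_format`; the exact option the commenter set isn't named): the values keep full precision internally, and only the rendering is truncated to 2dp.

```python
import pandas as pd

# Hypothetical reproduction: the Series stores full precision;
# only the rendered output is truncated by the display option.
pd.set_option("display.float_format", "{:.2f}".format)
s = pd.Series([1.22336, 1.23557])
rendered = s.to_string()   # values render as "1.22" and "1.24"
stored = float(s.iloc[0])  # underlying value is still 1.22336
pd.reset_option("display.float_format")
```

Resetting the option (or printing the underlying floats directly) shows nothing was actually lost.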
-
I apologise in advance for the rudimentary nature of this question.
I've been experimenting with chronos to make predictions on various timeseries data. The initial approach used ChronosPipeline.from_pretrained to perform zero-shot forecasting, and I would do a little preparation of the data by subtracting the minimum value of the series, forecasting, and adding it back on again. This works fine and gives some decent forecasts.
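The offset trick described above can be sketched as follows. This is a hedged illustration: `naive_last` is a hypothetical stand-in for the actual `ChronosPipeline.predict` call, which would take its place as `forecast_fn`.

```python
import numpy as np

def forecast_with_offset(series, horizon, forecast_fn):
    """Subtract the series minimum, forecast, then add it back.

    `forecast_fn` stands in for the real model call (e.g. a wrapper
    around ChronosPipeline.predict); it just needs to map the
    shifted series to `horizon` future values.
    """
    offset = series.min()
    preds = forecast_fn(series - offset, horizon)
    return preds + offset

# Naive stand-in forecaster: repeat the last observed value.
def naive_last(series, horizon):
    return np.full(horizon, series[-1])

y = np.array([1.22336, 1.22901, 1.23120, 1.23557])
fcst = forecast_with_offset(y, 3, naive_last)
```

Because the offset is added back after forecasting, the model only ever sees the variation above the series minimum.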
I've then moved on to looking at using chronos with autogluon, not least because I want to fine-tune against specific flavours of time series. To do this I've created TimeSeriesDataFrames and fine-tuned with TimeSeriesPredictor, but I do not presently have any kind of preliminary processing of the data. This works fine too, except when I have a timeseries where the significant variation, as it were, is after the decimal point, e.g. data like 1.22336, …, 1.23557. When I inspect the predictions from a forecast on a timeseries of this magnitude, I will just see a list of results to 2dp, like 1.22 ad infinitum.
I am, perhaps mistakenly, under the impression that AG performs any preprocessing automagically. Is this so? Do I need to perform some manner of preprocessing of data in this scenario (and if so, how might I achieve that?), or am I simply missing a parameter somewhere that would allow me to specify a greater degree of precision in the resulting predictions?
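Before suspecting the model or the preprocessing, it's worth checking whether the 2dp output is only a display artifact. A diagnostic sketch (the `"mean"` column name here mirrors AutoGluon's prediction output, but any DataFrame works):

```python
import pandas as pd

# If a float display option is set, every value renders to 2dp
# even though full precision is still stored underneath.
pd.set_option("display.float_format", "{:.2f}".format)
preds = pd.DataFrame({"mean": [1.22336, 1.22410, 1.22388]})
shown = preds.to_string()     # every value renders as 1.22
raw = preds["mean"].tolist()  # full precision survives in the data
pd.reset_option("display.float_format")
```

If `raw` holds the full-precision values, no extra preprocessing or precision parameter is needed.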
I hope I've explained that in a way that's actually understandable.