re-scaling at each iteration does wonders #85
lorenzozanisi
started this conversation in
Show and tell
Replies: 2 comments
-
Great! I assume you made those changes in your branch, which isn't really compatible with main. Could you copy those changes into a branch that can be merged into main?
-
Two things to note:
@zcapjdb I'm running this again with 20,000 inputs for 20 iterations, just to check whether this behaviour is stable, and will then merge to main.
-
I am now unscaling all datasets at the end of each iteration, adding the (unscaled) candidates to the (unscaled) training set, then creating a new scaler based on this enlarged training set and re-scaling everything else. The improvement is crazy. As usual, train on the left, test on the right.
@zcapjdb, following from #78: I was getting these low losses because I was using this new re-scaling method, but I forgot...
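For clarity, the per-iteration procedure described above can be sketched as follows. This is a minimal illustration, not the actual code from the branch: the function name, the array-based data representation, and the hand-rolled per-feature standardisation (equivalent in spirit to scikit-learn's `StandardScaler`) are all assumptions made for the example.

```python
import numpy as np

def refit_and_rescale(train_u, candidates_u, other_sets_u):
    """One iteration of the re-scaling scheme (illustrative sketch):
    merge the unscaled candidates into the unscaled training set,
    fit a fresh scaler on the enlarged training set, and re-scale
    every other dataset with that new scaler."""
    # Add the (unscaled) candidates to the (unscaled) training set.
    new_train = np.concatenate([train_u, candidates_u], axis=0)
    # Fit a new per-feature scaler (mean/std) on the enlarged train set.
    mean = new_train.mean(axis=0)
    std = new_train.std(axis=0)
    std[std == 0.0] = 1.0  # guard against constant features
    scale = lambda d: (d - mean) / std
    # Re-scale the training set and everything else with the new scaler.
    return scale(new_train), [scale(d) for d in other_sets_u], (mean, std)
```

The key point is that the scaler is refitted from scratch on the *unscaled* data each iteration, rather than reusing the scaler fitted on the original training set, so the normalisation tracks the growing training distribution.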