Loss exploded??? #2
Comments
I am facing the same issue. Please let me know if you have resolved it.
Actually, it is a common occurrence when dealing with a variational autoencoder. There are two ways to resolve it.
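For anyone landing here: one common mitigation for a VAE loss blowing up (a sketch of a standard trick, not necessarily one of the two ways the commenter had in mind) is KL-weight annealing, where the KL term's weight ramps from 0 to 1 over a warmup period so it cannot dominate the loss early in training. The `warmup_steps` value here is a hypothetical hyperparameter:

```python
def kl_weight(step, warmup_steps=10000):
    """Linearly anneal the KL term's weight from 0 to 1 over warmup_steps.

    Early in training the posterior is far from the prior, so a
    full-weight KL term can dominate the loss and destabilize it;
    annealing lets the reconstruction term train first.
    """
    return min(1.0, step / warmup_steps)

# In the training loop the total loss would then be:
#   total_loss = recon_loss + kl_weight(step) * kl_loss
```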
Thanks for your reply, I will try it immediately!
Sorry, I didn't reply to you in time. I have been busy with some other work recently, so I haven't solved this problem yet.
@WhiteFu if you are using this code, then use a large (more than 50 hours) expressive dataset like Blizzard to get a decent result.
Hi, I have the same problem. I assumed I should modify some hparams, but it still doesn't work. Please let me know if you have solved this. Thx 😄
The loss is not stable, so you can modify the upper limit of the parameter in the file train.py on line 133.
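For readers without the repo open: an "upper limit" parameter at that point in train.py is most likely a gradient-norm clipping threshold (an assumption on my part; the name and default below are illustrative, not from this repo). Clipping by global norm works like this:

```python
import math

def clip_grad_norm(grads, max_norm):
    """Scale gradients so their global L2 norm does not exceed max_norm.

    This mirrors what torch.nn.utils.clip_grad_norm_ does for tensors;
    a smaller max_norm makes training more stable at the cost of
    slower progress, so raising or lowering it changes loss stability.
    """
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

# Example: gradients [3.0, 4.0] have global norm 5.0; clipped to
# max_norm=1.0 they are rescaled to roughly [0.6, 0.8].
```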
Hi, but it seems my loss = nan (every time at the same step when training). I tried modifying the batch size and the learning rate, but it still doesn't work.
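A note on that symptom: a NaN at the same step every run usually points to one specific batch (e.g. an outlier utterance) or a log/division hitting zero, rather than the learning rate. A cheap guard is to skip the update when the loss is non-finite; this batch-skipping sketch is hypothetical, not code from this repo:

```python
import math

def should_skip_step(loss_value):
    """Return True if this optimization step should be skipped.

    Skipping a non-finite loss keeps one bad batch from poisoning the
    weights; if the same step is always bad, log the batch indices to
    find the offending sample.
    """
    return not math.isfinite(loss_value)

# In the training loop:
#   if should_skip_step(loss.item()):
#       optimizer.zero_grad()
#       continue
```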
@MisakaMikoto96 aware of
I get a "loss exploded" error in the training stage!
I did not modify the original hyperparameters, and I want to know how to solve the problem.