Generated waves were empty #122
Comments
Sorry, this problem may seem stupid, but when I change is_training to True, the output isn't just silence anymore, although I still can't understand what it is saying. So, was it about batch normalization? @Kyubyong
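One common reason for that kind of behaviour is batch normalization: with training=True the layer normalizes with the current batch's statistics, while with training=False it falls back to moving averages that are only correct if the corresponding update ops were run during training. Below is a minimal, generic TensorFlow 1.x sketch of that distinction (it is not this repository's actual code; the shapes and ops are placeholders):

```python
import tensorflow as tf

# Generic TF 1.x sketch (not this repo's code) of what the training flag changes.
x = tf.placeholder(tf.float32, [None, 80])   # e.g. a batch of mel features
is_training = tf.placeholder(tf.bool, shape=())

# training=True  -> normalize with the current batch's mean/variance
# training=False -> normalize with the moving averages learned during training
y = tf.layers.batch_normalization(x, training=is_training)

# The moving averages are only updated if the UPDATE_OPS run alongside the
# optimizer step; if they never did, inference with training=False can give
# badly scaled activations (e.g. near-silent output).
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.no_op()  # stand-in for the real optimizer step
```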
You're going to need to train for at least 150,000 steps, I'd imagine. See the pretrained models.
Thank you for your advice. Could you tell me how many steps you trained for and how it performed?
I ran into this problem too. But even if I set is_training to True, the audio produced in synthesis mode is still far worse than in training mode.
@frozen-finger How did you solve this problem? Can you please explain?
The difference in quality between audio generated during training and during inference is because your model hasn't learned attention. Make sure to look at the attention plots like the one here. If your model is learning attention, you should start to see a more or less diagonal line. This is also why @nevercast suggested you train for many more steps; most of my training sessions start producing decent attention plots around 60k steps. If your dataset has silence at the start or end of the audio files, trimming it would greatly help with this problem.
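As a concrete illustration of the trimming suggestion, here is a minimal preprocessing sketch using librosa's silence trimming; the file name, sample rate, and top_db threshold are placeholders to tune for your own dataset:

```python
import librosa
import soundfile as sf

# Trim leading/trailing silence so the model does not have to learn to
# attend over empty audio. top_db controls how aggressive the trim is.
wav, sr = librosa.load("sample.wav", sr=22050)
trimmed, _ = librosa.effects.trim(wav, top_db=30)
sf.write("sample_trimmed.wav", trimmed, sr)
```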
@TheNarrator Thanks for the response. @nevercast @frozen-finger @candlewill @Kyubyong There seems to be a problem with the predicted mel (mel_hat) in synthesis.py: I checked by providing the original mel extracted from the wav file as mel_hat instead of the model's prediction, and that gives a perfect, clean-sounding result. So I think the mel_hat prediction is going wrong. Will it improve after more steps?
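For anyone wanting to reproduce that sanity check, here is a rough sketch. It assumes utils.py exposes get_spectrograms(path) -> (mel, mag) and spectrogram2wav(mag), and that hyperparams.py defines hp.sr; adjust the names if your copy differs:

```python
import numpy as np
from scipy.io.wavfile import write

# Assumed helpers from this repo (adjust if the names differ in your copy).
from utils import get_spectrograms, spectrogram2wav
from hyperparams import Hyperparams as hp

mel_gt, mag_gt = get_spectrograms("reference.wav")

# 1) Ground-truth magnitude through Griffin-Lim should already sound clean;
#    if it does, the spectrogram-to-wav path is not the problem.
wav_gt = spectrogram2wav(mag_gt)
write("check_gt.wav", hp.sr, wav_gt.astype(np.float32))

# 2) In synthesize.py, feed mel_gt in place of the predicted mel_hat
#    (via the feed_dict). A clean result there, but a broken one with the
#    model's own mel_hat, points at the decoder/attention rather than at
#    the post-processing path.
```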
I ran into the same problem as you: mel_gt and mag_gt are correct, but the mel_hat and mag_hat predictions go wrong, and the synthesized audio is empty. Have you fixed it?
I have trained this for over 23k steps, but when using synthesis.py the result seems empty, even though the generated mag looks normal. Can anyone tell me how to solve this problem?