Demo and sample files for "Emotional End-to-End Neural Speech Synthesizer" (https://arxiv.org/pdf/1711.05447.pdf), a Korean emotional end-to-end neural speech synthesizer based on the Tacotron model (Y. Wang et al., "Tacotron: Towards End-to-End Speech Synthesis," arXiv preprint arXiv:1703.10135, 2017).
The code is forked from https://github.com/keithito/tacotron.
For a demo, visit http://143.248.97.172:9000/.
You can find synthesized waves and the corresponding attention alignment plots of a sentence in six different emotions here.
As an example, Mel spectrograms of a sentence spoken with happy and sad emotions are given here. You may notice significant differences in their prosodic aspects: the Mel spectrogram corresponding to the happy signal shows more variation in pitch and is shorter in duration.
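If you want to reproduce this comparison yourself, the following is a minimal sketch using librosa and matplotlib to plot Mel spectrograms of two sample waves side by side. The file names `happy.wav` and `sad.wav` are placeholders; substitute the actual sample files from the zip.

```python
import numpy as np
import librosa
import librosa.display
import matplotlib.pyplot as plt

fig, axes = plt.subplots(2, 1, figsize=(10, 6))
for ax, path, title in zip(axes, ["happy.wav", "sad.wav"], ["Happy", "Sad"]):
    # "happy.wav" / "sad.wav" are hypothetical names for samples in the zip
    y, sr = librosa.load(path, sr=None)            # keep the file's sampling rate
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=80)
    mel_db = librosa.power_to_db(mel, ref=np.max)  # log scale for readability
    librosa.display.specshow(mel_db, sr=sr, x_axis="time", y_axis="mel", ax=ax)
    ax.set_title(title + " sample")

plt.tight_layout()
plt.show()
```

The pitch and duration differences described above should be visible directly in the two panels: the happy sample's harmonics move around more, and its time axis is shorter.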
The samples in this zip file accompany Section 4 of our paper (Younggun Lee, Azam Rabiee, Soo-Young Lee, "Emotional End-to-End Neural Speech Synthesizer," accepted at Machine Learning for Audio Signal Processing (NIPS workshop), 2017).