MAM-BERT

Text-to-Speech for Low-Resource Agglutinative Language With Morphology-Aware Language Model Pre-Training

Authors: Rui Liu, Yifan Hu, Haolin Zuo, Zhaojie Luo, Longbiao Wang, and Guanglai Gao.

This paper was accepted by IEEE/ACM-TASLP 2024.

Speech samples

Speech samples are available on the demo page.

Citing

To cite this work:

@ARTICLE{10379131,
  author={Liu, Rui and Hu, Yifan and Zuo, Haolin and Luo, Zhaojie and Wang, Longbiao and Gao, Guanglai},
  journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, 
  title={Text-to-Speech for Low-Resource Agglutinative Language With Morphology-Aware Language Model Pre-Training}, 
  year={2024},
  volume={32},
  number={},
  pages={1075-1087},
  keywords={Linguistics;Data models;Speech processing;Decoding;Morphology;Training;Acoustics;Text-to-speech (TTS);agglutinative;morphology;language modeling;pre-training},
  doi={10.1109/TASLP.2023.3348762}
}
