This repository was archived by the owner on Nov 1, 2024. It is now read-only.

why pretrain quantizer on librispeech dev-clean? #19

Open
0nutation opened this issue Jul 12, 2022 · 0 comments

Comments

@0nutation

I want to ask why you chose to pretrain the quantizer on LibriSpeech dev-clean rather than train-clean-100 or train-clean-360?
