009.wav  All right, so I have a question here for people who live inside of China.
Traceback (most recent call last):
  File "/content/GPT-SoVITS/GPT_SoVITS/prepare_datasets/1-get-text.py", line 92, in process
    phones, word2ph, norm_text = clean_text(
  File "/content/GPT-SoVITS/GPT_SoVITS/text/cleaner.py", line 46, in clean_text
    phones = language_module.g2p(norm_text)
  File "/content/GPT-SoVITS/GPT_SoVITS/text/english.py", line 365, in g2p
    phone_list = _g2p(text)
  File "/content/GPT-SoVITS/GPT_SoVITS/text/english.py", line 272, in __call__
    tokens = pos_tag(words)  # tuples of (word, tag)
  File "/usr/local/lib/python3.9/site-packages/nltk/tag/__init__.py", line 168, in pos_tag
    tagger = _get_tagger(lang)
  File "/usr/local/lib/python3.9/site-packages/nltk/tag/__init__.py", line 110, in _get_tagger
    tagger = PerceptronTagger()
  File "/usr/local/lib/python3.9/site-packages/nltk/tag/perceptron.py", line 183, in __init__
    self.load_from_json(lang)
  File "/usr/local/lib/python3.9/site-packages/nltk/tag/perceptron.py", line 273, in load_from_json
    loc = find(f"taggers/averaged_perceptron_tagger_{lang}/")
  File "/usr/local/lib/python3.9/site-packages/nltk/data.py", line 579, in find
    raise LookupError(resource_not_found)
Here is the error output from Colab.
Resource averaged_perceptron_tagger_eng not found.
Please use the NLTK Downloader to obtain the resource:
For more information see: https://www.nltk.org/data.html
Attempted to load taggers/averaged_perceptron_tagger_eng/
Searched in:
- '/root/nltk_data'
- '/usr/local/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/local/lib/nltk_data'
- '/usr/share/nltk_data'
- '/usr/local/share/nltk_data'
- '/usr/lib/nltk_data'
- '/usr/local/lib/nltk_data'
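The lookup fails because newer NLTK releases (3.8.2 and later) split the perceptron tagger into per-language resources, so `averaged_perceptron_tagger_eng` must be downloaded explicitly. A minimal sketch of a workaround to run in the Colab notebook before `1-get-text.py` (the `ensure_tagger` helper is my own naming, not part of GPT-SoVITS or NLTK):

```python
import nltk

# Resource names: the per-language name NLTK 3.8.2+ looks for, plus the
# older unsuffixed name in case the runtime has an earlier NLTK.
RESOURCES = ["averaged_perceptron_tagger_eng", "averaged_perceptron_tagger"]

def ensure_tagger(resources=RESOURCES):
    """Download each tagger resource unless it is already on the search path."""
    for name in resources:
        try:
            nltk.data.find(f"taggers/{name}")  # raises LookupError if absent
        except LookupError:
            nltk.download(name)  # fetches into ~/nltk_data by default
```

Calling `ensure_tagger()` once in the notebook should let `pos_tag` find the model on one of the paths listed above.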