Got the following error while trying to fine-tune the model. The Python package versions used are the same as those in the environment.yaml file.
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "/usr/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
File "/root/fine-tuning/ProSpect/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'CLIPTextEmbeddings' object has no attribute 'embedding_forward'
Python package versions
taming-transformers-rom1504==0.0.6
transformers==4.18.0
torch==1.10.2+cu111
torch-fidelity==0.3.0
torchmetrics==0.6.0
torchvision==0.11.3+cu111
pytorch-lightning==1.5.9
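For what it's worth, the last frame of the traceback is torch's nn.Module.__getattr__, which only searches _parameters, _buffers and _modules before raising. My guess (unverified) is that the CLIP text encoder is monkey-patched with a bound embedding_forward function, the patched module then gets pickled into a spawned worker process, and when the child unpickles it the attribute cannot be resolved, so __getattr__ raises exactly this error. The snippet below is a minimal, self-contained illustration of that failure mode with a plain nn.Linear and a made-up embedding_forward; it is not the ProSpect code:

import pickle
import torch.nn as nn

def embedding_forward(self, x):
    # stand-in for a patched CLIP embedding forward pass
    return x

m = nn.Linear(2, 2)
# Monkey-patch the instance with a bound method, the way textual-inversion-style
# code patches CLIPTextEmbeddings.
m.forward = embedding_forward.__get__(m)

blob = pickle.dumps(m)  # pickling in the parent process succeeds

try:
    # Unpickling (what multiprocessing spawn does in the child) rebuilds the bound
    # method via getattr(module, "embedding_forward"); nn.Module.__getattr__ cannot
    # find it and raises the same kind of AttributeError as in the traceback above.
    pickle.loads(blob)
except AttributeError as e:
    print(e)  # 'Linear' object has no attribute 'embedding_forward'

If that is indeed the trigger here, avoiding the spawn path (for example DDP instead of ddp_spawn, or num_workers=0 on the DataLoaders) is the usual way to sidestep re-pickling the patched module, though I have not confirmed that against this repo.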
It's the path to the embedding checkpoint you want to initialize from. If you are training a new embedding, you don't need to set this parameter, but if you want to start the new training from a previously trained embedding, set the corresponding path.
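In other words, the parameter only decides whether training starts from scratch or continues from saved embeddings. A rough sketch of that behaviour (hypothetical names, not the actual ProSpect signature):

import torch

def init_token_embedding(embedding_ckpt_path=None, dim=768):
    # hypothetical helper, for illustration only
    if embedding_ckpt_path:
        # continue training from a previously trained embedding checkpoint
        return torch.load(embedding_ckpt_path, map_location="cpu")
    # no path given: initialize a fresh embedding and train it from scratch
    return torch.randn(1, dim) * 0.01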