
Error during Finetuning (AttributeError: 'CLIPTextEmbeddings' object has no attribute 'embedding_forward') #4

Open
curioIX opened this issue Jun 20, 2023 · 4 comments

curioIX commented Jun 20, 2023

Got the following error while trying to finetune the model. The Python package versions used are the same as those in the environment.yaml file.

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python3.8/multiprocessing/spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "/usr/lib/python3.8/multiprocessing/spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "/root/fine-tuning/ProSpect/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1177, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'CLIPTextEmbeddings' object has no attribute 'embedding_forward'

Python package versions

  • taming-transformers-rom1504==0.0.6
  • transformers==4.18.0
  • torch==1.10.2+cu111
  • torch-fidelity==0.3.0
  • torchmetrics==0.6.0
  • torchvision==0.11.3+cu111
  • pytorch-lightning==1.5.9
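
For what it's worth, the traceback shows the AttributeError being raised inside nn.Module.__getattr__ while a spawned worker unpickles the model (reduction.pickle.load). The sketch below is an assumption about the cause, not code from ProSpect itself: if the CLIP text model is monkey-patched at runtime with a bound embedding_forward (the way the textual-inversion codebase ProSpect builds on does), that patched attribute does not survive a pickle round trip, which is exactly what spawn-based multiprocessing performs.

import pickle
import torch.nn as nn

def embedding_forward(self, input_ids):
    # Stand-in for the patched CLIP forward; the real one lives in the repo.
    return self(input_ids)

# Any nn.Module works for the demonstration; ProSpect patches CLIPTextEmbeddings.
emb = nn.Embedding(10, 4)
emb.embedding_forward = embedding_forward.__get__(emb)  # instance-level monkey patch

data = pickle.dumps(emb)  # pickling succeeds (the bound method is saved by name)...
pickle.loads(data)        # ...but unpickling raises:
# AttributeError: 'Embedding' object has no attribute 'embedding_forward'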

curioIX commented Jun 20, 2023

Update: it works fine with single-GPU training, but with multiple GPUs the error persists.
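
That pattern (fine in a single process, failing inside multiprocessing/spawn.py) points at the spawn-based DDP plugin, which pickles the model into each worker, losing the monkey patch in the round trip. A possible workaround, untested and assuming the repo's Lightning trainer settings can be overridden: use the non-spawn ddp strategy available in pytorch-lightning 1.5.x, which relaunches the training script once per GPU instead of pickling the model.

from pytorch_lightning import Trainer

# Non-spawn DDP: each GPU gets a fresh process that re-runs the script,
# so the runtime monkey patch is re-applied instead of being pickled.
trainer = Trainer(gpus=2, strategy="ddp")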

@carvychen

Hi, sorry for bothering you. Do you know what the input param "opt.embedding_manager_ckpt" does?


curioIX commented Jul 10, 2023

@carvychen

It's the path to the embedding checkpoint that you want to initialize from. If you are training a new embedding, you don't need to set this parameter; but if you want to initialize the new training from a previously trained embedding, set it to the corresponding path.

Example
python main.py --base configs/stable-diffusion/v1-finetune.yaml -t --actual_resume ./models/sd/v1-5-pruned.ckpt -n obj --data_root data/obj --embedding_manager_ckpt logs/obj-v1_prospect.pt

Where 'logs/obj-v1_prospect.pt' is the embedding checkpoint from a previous training.
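
If you want to sanity-check such a checkpoint before passing it in, it's just a torch.load-able file. The key names below ("string_to_token", "string_to_param") come from the textual-inversion codebase that ProSpect builds on and are an assumption about its format:

import torch

ckpt = torch.load("logs/obj-v1_prospect.pt", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
    # Each entry should map a placeholder string to its learned embedding tensor.
    for name, param in ckpt.get("string_to_param", {}).items():
        print(name, getattr(param, "shape", type(param)))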

@carvychen

Thanks for your kind help.
