Hi, thanks for your work! I am a bit confused about the explicit workflow for attribute manipulation in the img2img generation process.
How can I insert a material/layout/content from an image into the generation process, given that there is no style_dir param?
Do I need to finetune/train the model on a single image and then use prompts following the T2I prompts in the README (where * gets replaced by the learned embedding)? What is the general idea behind the finetuning of the model?
Hi. The target concept is injected by replacing the * token with the learned token embedding produced by the hypernet. Only the hypernet is trainable during training; the LDM is always frozen. The framework works in a per-image-per-hypernet manner, i.e. you train one hypernet per reference image.
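To make that concrete, here is a minimal PyTorch sketch of the general pattern described above (not this repo's actual code; `ToyHypernet`, `inject_concept`, the feature dimensions, and the placeholder position are all illustrative assumptions): a small hypernet maps a reference-image feature to a single token embedding, that embedding is substituted at the * position in the frozen text encoder's output, and only the hypernet receives gradients.

```python
# Minimal sketch of "* replaced by a hypernet-predicted embedding".
# All names, shapes, and the loss are placeholders, not the repo's real API.
import torch
import torch.nn as nn


class ToyHypernet(nn.Module):
    """Maps a reference-image feature to one token embedding."""

    def __init__(self, img_feat_dim=512, token_dim=768):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_feat_dim, 1024), nn.GELU(), nn.Linear(1024, token_dim)
        )

    def forward(self, img_feat):          # (B, img_feat_dim)
        return self.mlp(img_feat)         # (B, token_dim)


def inject_concept(token_embeddings, placeholder_mask, learned_embedding):
    """Overwrite the '*' position in the prompt embeddings.

    token_embeddings:  (B, T, D) output of the frozen text encoder
    placeholder_mask:  (B, T) bool, True exactly where '*' sits in each prompt
    learned_embedding: (B, D) embedding predicted by the hypernet
    """
    out = token_embeddings.clone()
    out[placeholder_mask] = learned_embedding   # one replacement per prompt
    return out


# --- training sketch: only the hypernet is updated, the LDM stays frozen ---
B, T, D = 2, 77, 768
hypernet = ToyHypernet(token_dim=D)
optimizer = torch.optim.Adam(hypernet.parameters(), lr=1e-4)

# Stand-ins for frozen-encoder outputs; in the real framework these come from
# the CLIP text/image encoders and the conditioning feeds the frozen UNet.
text_embs = torch.randn(B, T, D)                 # frozen text-encoder output
img_feat = torch.randn(B, 512)                   # frozen image-encoder feature
mask = torch.zeros(B, T, dtype=torch.bool)
mask[:, 5] = True                                # assumed position of '*'

learned = hypernet(img_feat)
cond = inject_concept(text_embs, mask, learned)  # conditioning for the UNet

# Placeholder loss; in practice this is the usual diffusion noise-prediction MSE
loss = cond.pow(2).mean()
loss.backward()                                  # gradients reach only the hypernet
optimizer.step()
```

Because the hypernet is trained against a single reference image, inserting a new image's material/layout/content means training a new hypernet for that image and then reusing the same T2I prompts with * in them.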