Combining OMG with InstantID for multi-concept generation #3
Hi, thanks for the amazing job! I would like to ask: what is the exact approach for using OMG with InstantID for multi-concept generation? I understand that the inference code is available, but I don't quite follow what it is doing.
As far as I know, the InstantID architecture takes only one input reference image, so it would be good to get a high-level view of how OMG and InstantID are combined for multi-concept generation when there is more than one reference image. Thank you!

Comments
InstantID can take multiple images as reference: the embeddings of all reference images are averaged and used as the input of the IdentityNet (a ControlNet). A sketch of that averaging follows.
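For illustration only, here is a minimal sketch of what that averaging could look like, assuming insightface's `FaceAnalysis` with the `antelopev2` pack (the model name follows common InstantID setups; the helper function and its details are hypothetical, not this repo's actual code):

```python
import cv2
import numpy as np
from insightface.app import FaceAnalysis

# Assumption: the 'antelopev2' pack is available locally, as in typical InstantID setups.
app = FaceAnalysis(name="antelopev2", providers=["CPUExecutionProvider"])
app.prepare(ctx_id=0, det_size=(640, 640))

def average_id_embedding(image_paths):
    """Extract one face embedding per reference image and average them."""
    embeddings = []
    for path in image_paths:
        img = cv2.imread(path)
        faces = app.get(img)
        if not faces:
            continue  # skip images where no face is detected
        # Take the largest detected face as the reference identity.
        face = max(faces, key=lambda f: (f.bbox[2] - f.bbox[0]) * (f.bbox[3] - f.bbox[1]))
        embeddings.append(face.normed_embedding)
    if not embeddings:
        raise ValueError("no faces found in any reference image")
    # The averaged vector then plays the role of the usual single-image
    # embedding fed to the IdentityNet (ControlNet).
    return np.mean(embeddings, axis=0)
```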
The key idea is the two-stage generation and the noise blending. Why two-stage? The first stage generates the overall multi-person layout and fixes each character's face region; the second stage re-denoises with the identity model and blends its noise prediction into the first stage's latents only inside each face mask. That is also how OMG combines with InstantID or ID LoRAs; a rough sketch of the blending step is below.
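Hedging similarly, this is only a rough sketch of the noise-blending idea, under the assumption that per-concept face masks are already available at latent resolution (all names here are illustrative, not the repo's actual API):

```python
import torch

def blend_noise(base_latents, concept_latents, concept_masks):
    """
    Blend per-concept predictions into the stage-one (layout) latents.

    base_latents:    [B, C, H, W] latents from the stage-one pass
    concept_latents: list of [B, C, H, W] latents, one per identity model
                     (e.g. InstantID or an ID LoRA pass)
    concept_masks:   list of [B, 1, H, W] binary face-region masks,
                     downsampled to latent resolution
    """
    blended = base_latents.clone()
    for latents, mask in zip(concept_latents, concept_masks):
        # Inside each face mask, take the identity model's prediction;
        # outside it, keep the stage-one latents so the layout is preserved.
        blended = mask * latents + (1.0 - mask) * blended
    return blended
```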
I see, thanks for the reply! One more question: if the embeddings of multiple images are averaged as the input to the IdentityNet, would we expect some mixture of facial features from the different IDs? In other words, would using an average of 3 ID image embeddings generally look worse than using an average of 2?
Hi @yzhang2016, we tested that. The face area of the person generated in the first stage limits the face generation for the user's identity in the second stage; in other words, we found less face similarity in some cases.