[Suggestion] - multi GPU parallel usage #292
Comments
Hey, parallel support has not (yet) been implemented. All issues with the Enhancement label are ready for (re-)evaluation and prioritisation of the backlog.
Hey, could you let me know how to enable multi-GPU support? Does it require codebase improvements or some system configuration? Looking forward to hearing from you.
What does that mean?
I assume that this comes down to an estimated 3-5 days of development effort.
Hi @mashb1t, I would like to contact you on Skype. Could you share your WhatsApp number or Skype ID?
@oldhand7 No. All communication regarding Fooocus should happen in this repository. You can open a discussion in the "ideas" or "q&a" category to exchange ideas.
Sorry, I'm afraid I did something wrong. I would like to know whether it is possible and, if so, how to implement it. You mentioned 3-5 days of work would be expected. Is that work underway, or is it just an estimate? Can I have a look at the current work status?
@oldhand7 The last update of Fooocus introduced a --multi-users flag, which currently has no effect. I assume that either ldm_patched is being worked on or this has already been added as general preparation for the future.
Actually, I've just tried to do it myself with multiprocessing, but I don't think I'm on the right track. I changed webui.py for multi-threading, but it didn't work. May I ask which parts should be improved, or what the key part to implement for this feature is? Do I need to use other Python libraries? Do I have to change the whole structure? I would like to contribute. Thanks.
@oldhand7 The key part is to make the model management, including all caches and memory optimisations, work for both one and multiple GPUs, as well as handling multiple async_worker processes and yielding correctly to Gradio.
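For anyone looking for a starting point, here is a minimal sketch of what dispatching work to one worker process per GPU could look like, assuming PyTorch and Python's multiprocessing. The `gpu_worker` function, the queues, and the prompt list are hypothetical placeholders and not part of the Fooocus codebase; the actual work described above (shared caches, memory optimisations, yielding back to Gradio) would sit on top of a structure like this.

```python
# Minimal sketch (not the Fooocus implementation) of dispatching generation
# tasks to one worker process per GPU. Assumes PyTorch is installed.
import multiprocessing as mp

import torch


def gpu_worker(device_index, tasks, results):
    """Consume tasks on a single GPU until a None sentinel arrives."""
    device = torch.device(f"cuda:{device_index}")
    # A real worker would load the model onto `device` once here and keep it
    # cached, so model management stays local to this GPU.
    while True:
        task = tasks.get()
        if task is None:  # sentinel: shut this worker down
            break
        # Hypothetical generation step; replace with the real pipeline call.
        results.put((device_index, f"generated image for prompt: {task!r}"))


def main():
    ctx = mp.get_context("spawn")  # spawn is required when mixing CUDA and multiprocessing
    tasks, results = ctx.Queue(), ctx.Queue()

    num_gpus = max(torch.cuda.device_count(), 1)
    workers = [
        ctx.Process(target=gpu_worker, args=(i, tasks, results))
        for i in range(num_gpus)
    ]
    for w in workers:
        w.start()

    prompts = ["a cat", "a dog", "a castle", "a forest"]
    for prompt in prompts:
        tasks.put(prompt)
    for _ in workers:
        tasks.put(None)  # one sentinel per worker

    for _ in prompts:
        print(results.get())  # in a real app, yield these back to the Gradio UI

    for w in workers:
        w.join()


if __name__ == "__main__":
    main()
```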
AFAIK, you mentioned here that it needs less than 5 days of work. So do you have a concrete plan or idea for implementing it the right way?
Can you help me with this implementation?
Is it possible? If so, can I contribute to this implementation? Of course, I may need your help.
Hey @oldhand7, I dig your enthusiasm, but I find your netiquette quite lacking -- please stop spamming the multitude of users subscribed to this issue and open a new discussion about this topic instead, as mashb1t suggested earlier.
Last comment from me on this matter: continued in #2292 for anybody who wants to follow along.
That people want it to happen. |
The software works fine with 1 GPU, but it completely ignores the others. It would be nice if it could automatically generate a few images at the same time, depending on how many GPUs the computer has.