[Bug]: Switching to another model and then back leads to different image (SDXL) #12619
Comments
Thanks for testing.
I would try disabling third-party extensions and testing again.
Yeah, I will test disabling extensions. I can 100% reproduce the issue and did so multiple times, even after restarting and reloading everything. You can find dynaVisionXL on civitai. I will test with another LoRA as soon as I find the time.
When you have time, also test with a completely new install.
I'm having this same issue too! It's really frustrating when the image you want to reproduce, or go back to and tweak the prompt slightly, now yields completely different results. I'm now using this model: https://civitai.com/models/119279, and I'm using a LoRA. I tend to create a txt2img image and then, having found a good starting image, use the loopback wave script in img2img to create a set of frames to animate with. Hope we can get to the bottom of why this may be happening. I'm running an RTX 2070 Max-Q (8 GB VRAM), and I've never been able to load the base SDXL model successfully; it seems to out-of-memory error/crash. I can only seem to load custom SDXL models. I will do some more testing and post any further updates here. Likewise, if you need any more info or data from me to help fix this issue, I'm happy to help.
Make sure that what you're seeing is not caused by SDP. If what you're seeing is just SDP being non-deterministic, then it's not a bug.
I have started using the --opt-sdp-attention argument, as suggested, for performance/generation time, but the differences in the images are very noticeable.
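For context on why SDP can be non-deterministic at all: GPU attention kernels may sum floating-point terms in a different order on each run, and floating-point addition is not associative. A minimal Python illustration of that underlying effect (the numbers are contrived for demonstration, not taken from any attention kernel):

```python
# Floating-point addition is not associative: summing identical terms
# in a different order can change the result. Parallel reductions on
# the GPU (as in SDP attention) may reorder sums between runs, which
# is why outputs can differ slightly from run to run.
a = sum([1e16, 1.0, -1e16])   # 1e16 + 1.0 rounds back to 1e16 -> 0.0
b = sum([1e16, -1e16, 1.0])   # large terms cancel first        -> 1.0
print(a, b)  # 0.0 1.0
```

Differences from this effect are typically subtle, though; wildly different images point at something else.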
Hi again. Here are the two image examples using the exact same prompt/model/settings etc., and as you can see, they are rather more different than I would perhaps expect from what I now know can happen when using arguments like xformers.
@djdookie
I made progress and found a way to fix this!! I did more tests: I upgraded my graphics card driver, tried another browser, and tried different settings. (1) Maximum number of checkpoints loaded at the same time: 1 -> 2. Step (1) made the issue disappear!
So I guess it has to do with how the webUI loads/reloads the checkpoints.
@w-e-w @TomTomGit86 Can you tell me what your setting is for "Maximum number of checkpoints loaded at the same time"?
Maximum number of checkpoints loaded at the same time
It's not anywhere near as different as in my experience, though. Where do I find the setting for the maximum number of checkpoints? Am I being stupid? I'm assuming you mean in Auto1111; I've opened the all-settings view and searched for it, but I can't seem to find it, though maybe I'm not looking properly. Excited at the fact we might have a fix! lol
Set it to 2, click apply settings, restart the webUI, and see if the issue is still there, please!
Set "checkpoints to cache in RAM"? I don't have the option "Maximum number of checkpoints loaded".
IIRC
Oh yeah, just noticed that. ^^
Ah, I was going to try out the new dev branch to see if that might fix the situation. Do you know the command for the dev branch?
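For reference, assuming a standard git clone of the webui repository, switching to the dev branch usually looks like this (a sketch; run it from inside your stable-diffusion-webui folder):

```shell
# Run from inside the stable-diffusion-webui folder.
git fetch origin     # get the latest branches from GitHub
git checkout dev     # switch to the dev branch
git pull             # update it to the newest commits

# To switch back later:
# git checkout master
```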
Try your current version with "checkpoints to cache in RAM" = 2 first, please.
Okay, erm... cd /?
OK, I tested your linked Art3mis LoRA in both versions. In short: even the image I get from the model I swap to (dynaVision XL in my tests) looks different. Conclusion: it seems that if I load another model from disk, and not from cache, the LoRA weights get corrupted somehow. If I swap back to the first model, again loading from disk and not from cache, the LoRA weights are still corrupted.
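That disk-vs-cache distinction suggests why the setting masks the problem. Below is a purely hypothetical Python sketch (none of these names or mechanisms are webui's actual code; `reload_from_disk`, the LRU cache, and the "residue" value are invented for illustration) of how an in-RAM checkpoint cache of size 2 would hide a reload path that corrupts weights:

```python
from collections import OrderedDict
from copy import deepcopy

# Session state: per the reports above, once a LoRA has been used, the
# suspect reload path leaves a stale delta in whatever it loads.
state = {"lora_residue": 0.0}

def reload_from_disk(name):
    # Hypothetical stand-in for the broken reload path.
    return {"w": 1.0 + state["lora_residue"]}

class CheckpointCache:
    """LRU cache of checkpoints kept in RAM, mimicking the setting
    'Maximum number of checkpoints loaded at the same time'."""

    def __init__(self, max_loaded):
        self.max_loaded = max_loaded
        self._cache = OrderedDict()

    def load(self, name):
        if name in self._cache:                 # RAM hit: stored copy
            self._cache.move_to_end(name)
            return deepcopy(self._cache[name])
        weights = reload_from_disk(name)        # miss: suspect path
        self._cache[name] = deepcopy(weights)
        if len(self._cache) > self.max_loaded:
            self._cache.popitem(last=False)     # evict oldest entry
        return weights

def session(max_loaded):
    """Simulate: load base, apply a LoRA, switch model, switch back."""
    state["lora_residue"] = 0.0
    cache = CheckpointCache(max_loaded)
    cache.load("base")                  # first generation: clean
    state["lora_residue"] = 0.5         # a LoRA gets applied
    cache.load("other")                 # switch to another model
    return cache.load("base")["w"]      # switch back to the first

print(session(max_loaded=1))   # 1.5 -> evicted, reload corrupts "base"
print(session(max_loaded=2))   # 1.0 -> pristine RAM copy restored
```

In this toy model, with `max_loaded=1` the swap-back goes through the suspect reload and comes back corrupted, while with `max_loaded=2` the pristine in-RAM copy is restored, matching what was observed in the thread.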
Yeah, having "checkpoints to cache in RAM" = 2 fixed it for me too.
Thank you so much for helping with this; this issue was going to drive me round the bend. You can spend a fair bit of time generating the image you want, and then finding it can no longer be recreated or tweaked can be really frustrating! Kudos @w-e-w
Here are the pictures of my latest tests for Art3mis LoRA v1.0 and v2.0: image 1 = SDXL base. Art3mis LoRA v1.0, "Maximum number of checkpoints loaded at the same time" = 2 (issue fixed). Art3mis LoRA v2.0, "Maximum number of checkpoints loaded at the same time" = 2 (issue fixed).
Please put the images in a table to save space, or substitute ABC for the images.
Yeah, would you advise staying on the master branch or switching to the dev branch?
Yeah, sorry. Can we do that natively here, or do you do it before posting?
You can preview before you post; you can also edit the post afterwards.
lol, I did it, mate :-) Or you did, and I thought I'd done it. Agreed though, it looks better truncated.
Would you recommend setting the VAE cache to 2 also?
I think this can save some time if you switch VAEs a lot and if you can spare the RAM.
I admit you can see the issue much more clearly with my LoRA. The Art3mis LoRA is good enough to demonstrate the issue, and it is publicly available.
If you look at my examples, it was wildly different.
I'd call it a workaround I discovered after identifying the issue and narrowing it down as much as possible. But for a real fix, it's the coders' turn now. ;)
Any progress on this?
Is there a way to reproduce what this bug was doing via model merges? Because I actually like many of the results I was getting after triggering this issue...
Please see #13516
Is there an existing issue for this?
What happened?
If I generate an image with sdxl_base_1.0_0.9vae with one self-trained LoRA (with strength 0.8),
then switch the model to another custom SDXL 1.0 model, generate another image with the exact same prompt,
and then switch back to the first model and generate an image with the exact same parameters again, it looks different/broken.
This only happens if I use a LoRA.
When I completely restart the webUI I can generate the correct first image again.
So for me it seems that something is not unloaded/replaced in memory correctly.
Edit: It looks like the LoRA is applied with strength 1.0 (instead of 0.8) after switching models and switching back.
And it is always applied, even if I set it to strength 0.0 or remove it from the prompt!
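The symptoms in the edit above (strength acting like 1.0, and the LoRA impossible to remove) are what you would get if apply/restore bookkeeping were reset while a LoRA was still baked into the weights. A purely hypothetical Python sketch (this `Module` class and its numbers are invented for illustration; it is not webui's actual code):

```python
from copy import deepcopy

class Module:
    """Toy stand-in for one model layer, with the apply/restore
    bookkeeping that makes LoRA strength changes reversible."""

    def __init__(self, weight):
        self.weight = weight
        self.backup = None  # pristine weight, saved before any LoRA

    def set_lora(self, delta, strength):
        if self.backup is None:
            self.backup = deepcopy(self.weight)  # snapshot "original"
        self.weight = self.backup + strength * delta

# Normal use: strength changes stay reversible via the backup.
m = Module(weight=1.0)
m.set_lora(delta=0.5, strength=0.8)
print(round(m.weight, 6))  # 1.4
m.set_lora(delta=0.5, strength=0.0)
print(round(m.weight, 6))  # 1.0 -> the backup restores the original

# Buggy model switch: the bookkeeping is reset while the LoRA is still
# baked in. The next snapshot captures corrupted weights, so even
# strength 0.0 can no longer remove the LoRA.
m.set_lora(delta=0.5, strength=0.8)
m.backup = None                      # simulated reset on model swap
m.set_lora(delta=0.5, strength=0.0)
print(round(m.weight, 6))  # 1.4 -> LoRA stuck despite strength 0.0
```

If something like this is happening, it would also explain why a full webUI restart fixes it: the stale snapshot is discarded along with the process.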
Steps to reproduce the problem
What should have happened?
The last image should be the same as the first.
Version or Commit where the problem happens
v1.5.1, also tried current dev branch build, same issue
What Python version are you running on ?
Python 3.11.x (or above, not supported yet)
What platforms do you use to access the UI ?
Windows
What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
Cross attention optimization
sdp
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
List of extensions
openpose-editor, sd-webui-controlnet, sd-webui-openpose-editor, sd-webui-refiner, sd-webui-roop-nsfw, ultimate-upscale-for-automatic1111
Console logs
Additional information
No response