
The behavior difference of 1.1 needs some experiments #955

Closed
1 task done
Orisen opened this issue Apr 21, 2023 · 22 comments
Labels
enhancement New feature or request

Comments

@Orisen

Orisen commented Apr 21, 2023

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

What happened?

Using one ControlNet seems to work fine, but as soon as I use more than one ControlNet, the image quality gets much worse since ControlNet 1.1.

After I updated to ControlNet 1.1, I re-tested some images that were done with ControlNet 1.0.
The ControlNet webui page says it is still possible to use the old models, so I did that:

You can still use all previous models in the previous ControlNet 1.0. Now, the previous "depth" is now called "depth_midas", the previous "normal" is called "normal_midas", the previous "hed" is called "softedge_edge". And starting from 1.1, all line maps, edge maps, lineart maps, boundary maps will have black background and white lines.

I used the preprocessors "canny" and "depth_midas", and since I couldn't find "softedge_edge", I assumed it is the "softedge_hed" preprocessor and used that one.

2023-04-21 19_03_59-Stable Diffusion

Here are the controlnet 1.0 parameters from the PNG info of the img:

ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: controlnetPreTrained_cannyV10 [e3fe7712], ControlNet-0 Weight: 0.75, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, ControlNet-1 Enabled: True, ControlNet-1 Module: hed, ControlNet-1 Model: controlnetPreTrained_hedV10 [13fee50b], ControlNet-1 Weight: 0.75, ControlNet-1 Guidance Start: 0, ControlNet-1 Guidance End: 1, ControlNet-2 Enabled: True, ControlNet-2 Module: depth, ControlNet-2 Model: controlnetPreTrained_depthV10 [400750f6], ControlNet-2 Weight: 0.75, ControlNet-2 Guidance Start: 0, ControlNet-2 Guidance End: 1

Here is the image quality I expected. This was done with ControlNet 1.0. To be fair, I didn't expect the exact same picture, but what I get now is just far worse (see below):

controlnet-v10

And here is the image I'm getting now with ControlNet 1.1 and the exact same parameters. Way too overblown, as if some values were applied double or triple:

controlnet-v11

Also strange: after I generate an image, the detectmaps are now displayed twice. This wasn't the case in ControlNet 1.0. To be fair, I'm not sure whether this means anything, but combined with the impression that some values are applied double or triple, maybe there is a bug somewhere that makes one ControlNet get applied twice. Just an idea.

2023-04-21 19_31_02-Stable Diffusion

I tested this with different SD models like Protogen and RevAnimated to make sure the SD model is not the problem.
I also tried it with Guess Mode on and off.

As soon as you use more than one ControlNet, the image quality gets worse.

Here is the full PNG info if needed:

a (portrait:1.2) of a (woman|female:1.4) with short blonde hair|blue hair|white hair, under water, valorant style, samdoesarts, greg tocchini, jeremy mann, aleksi briclot, ellen jewett, alphonse mucha, masterpiece, best quality, highly detailed, (no lora or embed:0.0)
Negative prompt: anime, realistic, (ugly:1.2), childish, immature, disfigured, deformed, (bad proportions:1.2), (bad anatomy:1.2), bad hands, bad eyes, missing fingers, extra digit, fewer digits, oversaturated, grain, lowres, worst quality, low quality, signature, watermark, poorly drawn, poorly drawn face, poorly drawn hands, poorly drawn eyes, poorly drawn clothes, bad clothes, naked clothes, fused limbs, missing limbs, floating limbs, disconnected limbs, long neck, long body, duplicate, (round face:1.2), (beard:1.2), wavy hair, wild hair, bad hair, face tattoo, face mark, elf ears, glasses, intricate, complex, complicated, deformityv6-embed, badhandv4-embed, deepnegative-v1-75-embed
Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 7.5, Seed: 808080, Size: 480x640, Denoising strength: 0.35, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: controlnetPreTrained_cannyV10 [e3fe7712], ControlNet-0 Weight: 0.75, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, ControlNet-1 Enabled: True, ControlNet-1 Module: hed, ControlNet-1 Model: controlnetPreTrained_hedV10 [13fee50b], ControlNet-1 Weight: 0.75, ControlNet-1 Guidance Start: 0, ControlNet-1 Guidance End: 1, ControlNet-2 Enabled: True, ControlNet-2 Module: depth, ControlNet-2 Model: controlnetPreTrained_depthV10 [400750f6], ControlNet-2 Weight: 0.75, ControlNet-2 Guidance Start: 0, ControlNet-2 Guidance End: 1, Hires upscale: 1.25, Hires steps: 5, Hires upscaler: Lanczos, Postprocess upscale by: 2, Postprocess upscaler: 4x-UltraSharp

Steps to reproduce the problem

  1. Go to the ControlNet settings in the WebUI and set "Multi ControlNet: Max models amount (requires restart)" to 3.
  2. Load any model of your choice.
  3. Set up 2-3 ControlNet models.
    Here are some ControlNet 1.1 parameters from an image generated with Model hash: 44f90a0972, Model: protogenX34Photorealism_1:

ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: controlnetPreTrained_cannyV10 [e3fe7712], ControlNet-0 Weight: 0.75, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, ControlNet-1 Enabled: True, ControlNet-1 Module: softedge_hed, ControlNet-1 Model: controlnetPreTrained_hedV10 [13fee50b], ControlNet-1 Weight: 0.75, ControlNet-1 Guidance Start: 0, ControlNet-1 Guidance End: 1, ControlNet-2 Enabled: True, ControlNet-2 Module: depth_midas, ControlNet-2 Model: controlnetPreTrained_depthV10 [400750f6], ControlNet-2 Weight: 0.75, ControlNet-2 Guidance Start: 0, ControlNet-2 Guidance End: 1

  4. Press Generate and get disgusted by the poor quality.

What should have happened?

See above. It should look like the picture generated with ControlNet 1.0.

Commit where the problem happens

webui: 22bcc7be
controlnet: 8d0a015

What browsers do you use to access the UI ?

Google Chrome

Command Line Arguments

No, I don't use any COMMANDLINE_ARGS.

Console logs

venv "D:\Stable Diffusion Web UI\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Commit hash: 22bcc7be428c94e9408f589966c2040187245d81
Installing requirements for Web UI

Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Loading weights [44f90a0972] from D:\Stable Diffusion Web UI\models\Stable-diffusion\others\s_protogenX34Photorealism_1.safetensors
Creating model from config: D:\Stable Diffusion Web UI\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Loading VAE weights specified in settings: D:\Stable Diffusion Web UI\models\VAE\vae-ft-mse-840000-ema-pruned.ckpt
Applying cross attention optimization (Doggettx).
Textual inversion embeddings loaded(8): aid28-embed, badhandv4-embed, badv4-embed, badv5-embed, deepnegative-v1-75-embed, deformityv6-embed, neeko-embed, verybadimage-v1-3-embed
Model loaded in 2.5s (load weights from disk: 0.2s, create model: 0.2s, apply weights to model: 0.5s, apply half(): 0.4s, load VAE: 0.1s, move model to device: 0.3s, load textual inversion embeddings: 0.6s).
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 6.0s (import torch: 0.8s, import gradio: 0.6s, import ldm: 0.3s, other imports: 0.5s, load scripts: 0.6s, load SD checkpoint: 2.7s, create ui: 0.3s, gradio launch: 0.1s).
Loading model: controlnetPreTrained_cannyV10 [e3fe7712]
Loaded state_dict from [D:\Stable Diffusion Web UI\extensions\sd-webui-controlnet\models\controlnetPreTrained_cannyV10.safetensors]
Loading config: D:\Stable Diffusion Web UI\extensions\sd-webui-controlnet\models\cldm_v15.yaml
ControlNet model controlnetPreTrained_cannyV10 [e3fe7712] loaded.
Loading preprocessor: canny
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 4096
raw_W = 3072
target_H = 640
target_W = 480
estimation = 480.0
preprocessor resolution = 512
Loading model: controlnetPreTrained_hedV10 [13fee50b]
Loaded state_dict from [D:\Stable Diffusion Web UI\extensions\sd-webui-controlnet\models\controlnetPreTrained_hedV10.safetensors]
Loading config: D:\Stable Diffusion Web UI\extensions\sd-webui-controlnet\models\cldm_v15.yaml
ControlNet model controlnetPreTrained_hedV10 [13fee50b] loaded.
Loading preprocessor: hed
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 4096
raw_W = 3072
target_H = 640
target_W = 480
estimation = 480.0
preprocessor resolution = 512
ControlNet preprocessor location: D:\Stable Diffusion Web UI\extensions\sd-webui-controlnet\annotator\downloads
Loading model: controlnetPreTrained_depthV10 [400750f6]
Loaded state_dict from [D:\Stable Diffusion Web UI\extensions\sd-webui-controlnet\models\controlnetPreTrained_depthV10.safetensors]
Loading config: D:\Stable Diffusion Web UI\extensions\sd-webui-controlnet\models\cldm_v15.yaml
ControlNet model controlnetPreTrained_depthV10 [400750f6] loaded.
Loading preprocessor: depth
Pixel Perfect Mode Enabled.
resize_mode = ResizeMode.INNER_FIT
raw_H = 4096
raw_W = 3072
target_H = 640
target_W = 480
estimation = 480.0
preprocessor resolution = 512
100%|██████████████████████████████████████████████████████████████████████████████████| 10/10 [00:07<00:00,  1.28it/s]
100%|████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:04<00:00,  1.05it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 15/15 [02:02<00:00,  8.13s/it]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 15/15 [02:02<00:00,  1.20s/it]
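The "Pixel Perfect" lines in the log above (raw 3072x4096 fit into a 480x640 target, estimation 480.0, preprocessor resolution 512) can be reproduced with a short sketch. This is a hedged reconstruction inferred from the printed values, with illustrative function names, not a verbatim copy of the extension's code:

```python
# Reconstruct the INNER_FIT "Pixel Perfect" estimate seen in the console log.
# Names (inner_fit_estimation, preprocessor_resolution) are illustrative.

def inner_fit_estimation(raw_h: int, raw_w: int, target_h: int, target_w: int) -> float:
    # scale factor that fits the raw image inside the target frame
    k = min(target_h / raw_h, target_w / raw_w)
    # estimation is the smaller scaled side
    return min(raw_h * k, raw_w * k)

def preprocessor_resolution(estimation: float) -> int:
    # snap to the nearest multiple of 64, as the preprocessors expect
    return int(round(estimation / 64.0)) * 64

est = inner_fit_estimation(4096, 3072, 640, 480)
print(est)                           # 480.0, matching "estimation = 480.0"
print(preprocessor_resolution(est))  # 512, matching "preprocessor resolution = 512"
```

Note that 480 / 64 = 7.5 rounds up to 8 here, which explains why the log shows a preprocessor resolution of 512 rather than 480.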

Additional information

No response

@lllyasviel
Collaborator

This seems to be related to an issue beginning with
#720

The "Use mid-control on highres pass (second pass)" option was removed in that pull request, and now if you use hires fix, the full ControlNet is applied to both passes. You also get two control images for each ControlNet: one for the base diffusion pass and one for the hires-fix pass.

I will add "Use mid-control on highres pass (second pass)" back today so that you can still use it.

Using this option means that when you use ControlNet with hires fix, the ControlNet has almost no influence in the high-res pass (it only controls the low-res pass).
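The pass-counting difference described above can be sketched as follows. The function and parameter names are illustrative only, not the extension's actual API; this just shows why removing the option effectively strengthens the control signal:

```python
# Illustrative sketch: with hires fix enabled, the full ControlNet residuals
# are injected in BOTH diffusion passes unless "only mid-control on the
# high-res pass" is enabled. Names are hypothetical, not the real API.

def full_control_passes(hires_fix: bool, only_mid_control_on_hires: bool) -> int:
    """Count the passes in which the full ControlNet residuals are applied."""
    passes = 1  # base (low-res) diffusion pass always gets full control
    if hires_fix and not only_mid_control_on_hires:
        passes += 1  # hires-fix pass also gets full control
    return passes

print(full_control_passes(hires_fix=True, only_mid_control_on_hires=False))  # 2
print(full_control_passes(hires_fix=True, only_mid_control_on_hires=True))   # 1
```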

@lllyasviel
Collaborator

lllyasviel commented Apr 21, 2023

Also, we are not sure this problem is related to ControlNet.
In your settings, your image resolution is 480, which is less than 512. Not sure if this is a user mistake (are you sure your previous results were also achieved at image size 480? Stable Diffusion's performance is usually worse for images smaller than 512).

I also see "Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 7.5".

Are you sure your previous high-quality 1.0 results were achieved with 10 steps at 480 resolution? These parameters are not commonly used for high-quality images.

@lllyasviel
Collaborator

Hello, we added "Use only-mid-control on high-res. fix (second pass)" back. You can now use it in the settings.

@lllyasviel
Collaborator

Hello, I tried several settings but cannot reproduce your problem. Can you share the input image so that we can reproduce it?

@lllyasviel
Collaborator

I am going to close the issue given the strange parameters: 480 resolution, 10 steps.

Feel free to share the input image and/or more details for reproduction.

We will reopen this issue if we can confirm that the problem is related to ControlNet.

@lllyasviel
Collaborator

lllyasviel commented Apr 22, 2023

Besides, I observed that the depth map generated by your cn1.0 run looks like the input of your cn1.1 test.

image

If you are using depth and hed maps extracted from your output images, then your tests for cn1.0 and cn1.1 are using different depth/hed/canny maps. With different inputs, it is impossible to get the same outputs.

@lllyasviel lllyasviel added the invalid This doesn't seem right label Apr 22, 2023
@Orisen
Author

Orisen commented Apr 22, 2023

Thanks for taking a look at this! I'm surprised that you couldn't recreate the issue; I can literally recreate it every time, with any input image and SD model. But I think I found the reason for the overblown colors: it's the cfg scale, which is now applied in cn1.1.
I remember cn1.0 had a setting for choosing whether the cfg scale should be used or not; in cn1.0 I had this turned off.
In your screenshot you have a cfg scale of 7; I had it set to 7.5. That's why the images in your screenshot look more sane and correct.
I tried this on my end, and when I set my cfg scale to 2 I get something much closer to what cn1.0 created.

xyz_grid-0106-808080-a (portrait_1 2) of a (woman_female_1 4) with short blonde hair_blue hair_white hair, under water, valorant style, samdoesarts,

Is it possible to get this option back? Or maybe we could have the option to set the cfg scale individually for each ControlNet.

@lllyasviel lllyasviel removed the invalid This doesn't seem right label Apr 22, 2023
@lllyasviel lllyasviel reopened this Apr 22, 2023
@lllyasviel
Collaborator

The previous "cfg-based control" was not technically correct, and it is removed in 1.1.
1.1 is always equivalent to turning the previous "cfg-based control" off.
In 1.0, turning "cfg-based control" off had nothing to do with your A1111 cfg scale.
If cfg 7 does not produce good results in 1.1, then cfg 7 cannot produce good results in 1.0 either, no matter how you set "cfg-based control", on or off.
It is impossible that you set cfg 7 in 1.0 with "cfg-based control" off and got good results. You must have been using other cfg scales in 1.0 if that is the case.
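For context, the standard classifier-free guidance (CFG) mix shows why the A1111 cfg scale alone determines how strongly the conditioned (including ControlNet-conditioned) prediction is amplified. A minimal sketch, with scalars standing in for noise-prediction tensors:

```python
# Standard classifier-free guidance: extrapolate from the unconditional
# prediction toward (and past) the conditional one by cfg_scale.

def cfg_combine(eps_uncond: float, eps_cond: float, cfg_scale: float) -> float:
    return eps_uncond + cfg_scale * (eps_cond - eps_uncond)

# With a strong control signal, cfg 7.5 extrapolates far past the conditional
# prediction, which can look "overblown":
print(cfg_combine(0.0, 1.0, 7.5))  # 7.5
# cfg 2 stays much closer to the conditional prediction:
print(cfg_combine(0.0, 1.0, 2.0))  # 2.0
```

This is consistent with the observation above that lowering the cfg scale to 2 produced results closer to the cn1.0 images.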

@lllyasviel
Collaborator

Let us know if you have further questions. The issue will be closed for now.

@Orisen
Author

Orisen commented Apr 23, 2023

Alright! Thanks for the clarification. The settings I wrote are exactly the same settings I used with cn1.0, from the 480 resolution to the 10 steps with the Karras sampler to the 7.5 cfg scale, and it worked every time; the images I got were good, clean and precise. But I accept defeat now and have to admit that something in cn1.1 works fundamentally differently, and I have to look into new settings.
If things don't work out for me, is there a way to go back to cn1.0 somehow?
Thanks again for taking a look at this. Much appreciated! Have a good one!

@lllyasviel
Collaborator

Hello @Orisen, you can use the old version here:
https://github.com/lllyasviel/webui-controlnet-v1-archived

Note that you need to completely remove the [sd-webui-controlnet] folder before you use the previous version.

@Orisen
Author

Orisen commented Apr 23, 2023

Hello @lllyasviel! Sorry for bothering you again, but I care too much about this and want the best for ControlNet, because I love it!

Thanks to the cn10 archive I could test my image generations again, and as expected everything worked fine. I screenshotted all the settings for both cn10 and cn11, so maybe this time it's clearer what is happening. Maybe it's a bug, maybe a feature, maybe a happy accident, but somehow cn10 creates better images than cn11. Maybe Guess Mode works differently, I don't know. Let me show you.

First here the controlnet 1.0 stuff

sd cn10 settings

canny settings

canny cn10 settings

depth settings

depth cn10 settings

hed settings

hed cn10 settings

the final pic and the maps

final pic
canny map cn10
depth map cn10
hed map cn10

the control net settings in settings

cn10 settings in settings

And here the controlnet 1.1 stuff

sd cn11 settings

canny settings

canny cn11 settings

depth settings

depth cn11 settings

hed settings

hed cn11 settings

the final pic and the maps

2023-04-23 17_11_15-Stable Diffusion
canny map cn11
depth map cn11
hed map cn11

the controlnet settings in settings

cn11 settings in settings

I also tested it with a different Model and a different Input.

Link to the model: https://civitai.com/models/44150/expmixline
Link to the cn models: https://civitai.com/models/9251/controlnet-pre-trained-models

Here the Input image:

input image

And here again first the cn10 stuff

alternate input sd cn10 settings
alternate input canny cn10 settings
alternate input depth cn10 settings
alternate input hed cn10 settings

And the cn11 stuff

alternate input sd cn11 settings
alternate input canny cn11 settings
alternate input depth cn11 settings
alternate input hed cn11 settings

the png info:

a (portrait:1.2) of a (woman|female:1.4) with short blonde hair|blue hair|white hair, under water, valorant style, samdoesarts, greg tocchini, jeremy mann, aleksi briclot, ellen jewett, alphonse mucha, masterpiece, best quality, highly detailed, (no lora or embed:0.0)
Negative prompt: anime, realistic, (ugly:1.2), childish, immature, disfigured, deformed, (bad proportions:1.2), (bad anatomy:1.2), bad hands, bad eyes, missing fingers, extra digit, fewer digits, oversaturated, grain, lowres, worst quality, low quality, signature, watermark, poorly drawn, poorly drawn face, poorly drawn hands, poorly drawn eyes, poorly drawn clothes, bad clothes, naked clothes, fused limbs, missing limbs, floating limbs, disconnected limbs, long neck, long body, duplicate, (round face:1.2), (beard:1.2), wavy hair, wild hair, bad hair, face tattoo, face mark, elf ears, glasses, intricate, complex, complicated, deformityv6-embed, badhandv4-embed, deepnegative-v1-75-embed
Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 7.5, Seed: 808080, Size: 480x640, Model hash: 4d651c7638, Model: others_a_expmixLine_v2, Denoising strength: 0.35, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: controlnetPreTrained_cannyV10 [e3fe7712], ControlNet-0 Weight: 0.75, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, ControlNet-1 Enabled: True, ControlNet-1 Module: depth, ControlNet-1 Model: controlnetPreTrained_depthV10 [400750f6], ControlNet-1 Weight: 0.75, ControlNet-1 Guidance Start: 0, ControlNet-1 Guidance End: 1, ControlNet-2 Enabled: True, ControlNet-2 Module: hed, ControlNet-2 Model: controlnetPreTrained_hedV10 [13fee50b], ControlNet-2 Weight: 0.75, ControlNet-2 Guidance Start: 0, ControlNet-2 Guidance End: 1, Hires upscale: 1.25, Hires steps: 5, Hires upscaler: Lanczos

01120-808080-a (portrait_1 2) of a (woman_female_1 4) with short blonde hair_blue hair_white hair, under water, valorant style, samdoesarts,
01121-808080-a (portrait_1 2) of a (woman_female_1 4) with short blonde hair_blue hair_white hair, under water, valorant style, samdoesarts,

So, yeah, there you have it. I hope this clears things up. If you know what is going on, then please let me know, because right now cn10 feels like black magic, and honestly I would love to use cn11.

@lllyasviel
Collaborator

Interesting. We will take a look soon.

@lllyasviel lllyasviel reopened this Apr 23, 2023
@lllyasviel lllyasviel changed the title [Bug]: No Support for Multi Controlnet yet? Image Quality way worse since Controlnet 1.1 The behavior difference of 1.1 needs some experiments Apr 23, 2023
@lllyasviel lllyasviel added the enhancement New feature or request label Apr 23, 2023
@lllyasviel
Collaborator

lllyasviel commented Apr 24, 2023

Edit: The problem is related to Guess Mode.

Below are results without Guess Mode

Meta

a (portrait:1.2) of a (woman|female:1.4) with short blonde hair|blue hair|white hair, under water, valorant style, samdoesarts, greg tocchini, jeremy mann, aleksi briclot, ellen jewett, alphonse mucha, masterpiece, best quality, highly detailed, (no lora or embed:0.0)
Negative prompt: anime, realistic, (ugly:1.2), childish, immature, disfigured, deformed, (bad proportions:1.2), (bad anatomy:1.2), bad hands, bad eyes, missing fingers, extra digit, fewer digits, oversaturated, grain, lowres, worst quality, low quality, signature, watermark, poorly drawn, poorly drawn face, poorly drawn hands, poorly drawn eyes, poorly drawn clothes, bad clothes, naked clothes, fused limbs, missing limbs, floating limbs, disconnected limbs, long neck, long body, duplicate, (round face:1.2), (beard:1.2), wavy hair, wild hair, bad hair, face tattoo, face mark, elf ears, glasses, intricate, complex, complicated, badhandv4, ng_deepnegative_v1_75t
Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 7.5, Seed: 808080, Size: 480x640, Model hash: 4d651c7638, Model: expmixLine_v2, Denoising strength: 0.35, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: control_sd15_canny [fef5e48e], ControlNet-0 Weight: 0.75, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, ControlNet-1 Enabled: True, ControlNet-1 Module: depth, ControlNet-1 Model: control_depth-fp16 [400750f6], ControlNet-1 Weight: 0.75, ControlNet-1 Guidance Start: 0, ControlNet-1 Guidance End: 1, ControlNet-2 Enabled: True, ControlNet-2 Module: hed, ControlNet-2 Model: control_hed-fp16 [13fee50b], ControlNet-2 Weight: 0.75, ControlNet-2 Guidance Start: 0, ControlNet-2 Guidance End: 1, Hires upscale: 1.25, Hires steps: 5, Hires upscaler: Lanczos

Used Embedding

Used embeddings: badhandv4 [dba1], ng_deepnegative_v1_75t [1a3e]

(I cannot find deformityv6)

Previous ControlNet 1.0

image

All ControlNet settings are the same as yours

Result from 1.0

image

ControlNet Extension 1.1

image

All ControlNet settings are the same as yours

Result from 1.1

image

Conclusion

  1. We cannot detect any obvious difference between cn1.0's and cn1.1's support for 1.0 models.
  2. We cannot reproduce your results in cn1.0. It is not clear how your results were achieved or what you have modified.

@lllyasviel
Collaborator

Wait a minute, I just found out that these are all in Guess Mode.
Let me investigate a bit more.

@lllyasviel
Collaborator

Update: now we can confirm that this is related to Guess Mode. Investigating.

@lllyasviel
Collaborator

God. It seems that cn1.0 has a big BUG: if you use --lowvram or --medvram, Guess Mode has an unexpected effect that makes results look better.

Perhaps we will have a big announcement soon.
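For readers unfamiliar with Guess Mode, it does not apply the ControlNet residuals uniformly: the original ControlNet repository scales the 13 residuals with a soft decay so the middle block keeps full strength and shallower blocks are damped. A hedged sketch of that schedule (the WebUI extension's exact constants and indexing may differ):

```python
# Sketch of Guess Mode's soft weighting over the 13 ControlNet residuals,
# following the schedule in the original ControlNet repository: residual i
# (0 = shallowest) is scaled by strength * decay**(n - 1 - i), so the last
# (middle-block) residual keeps full strength.

def guess_mode_scales(strength: float = 1.0, decay: float = 0.825, n: int = 13):
    return [strength * decay ** (n - 1 - i) for i in range(n)]

scales = guess_mode_scales()
print(scales[-1])               # 1.0 (middle block at full strength)
print(scales[0] < scales[-1])   # True (shallow blocks heavily damped)
```

Under --lowvram/--medvram the model is swapped in pieces, which is the kind of code path where a scaling schedule like this could be applied inconsistently.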

@lllyasviel
Collaborator

Hello, we fixed all problems in
#1011

Feel free to test and see if the problems are fixed.

@lllyasviel
Collaborator

Now ControlNet 1.1.09's new Control Mode can reproduce effects similar to the previous ones

a (portrait:1.2) of a (woman|female:1.4) with short blonde hair|blue hair|white hair, under water, valorant style, samdoesarts, greg tocchini, jeremy mann, aleksi briclot, ellen jewett, alphonse mucha, masterpiece, best quality, highly detailed, (no lora or embed:0.0)
Negative prompt: anime, realistic, (ugly:1.2), childish, immature, disfigured, deformed, (bad proportions:1.2), (bad anatomy:1.2), bad hands, bad eyes, missing fingers, extra digit, fewer digits, oversaturated, grain, lowres, worst quality, low quality, signature, watermark, poorly drawn, poorly drawn face, poorly drawn hands, poorly drawn eyes, poorly drawn clothes, bad clothes, naked clothes, fused limbs, missing limbs, floating limbs, disconnected limbs, long neck, long body, duplicate, (round face:1.2), (beard:1.2), wavy hair, wild hair, bad hair, face tattoo, face mark, elf ears, glasses, intricate, complex, complicated, badhandv4, ng_deepnegative_v1_75t
Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 7.5, Seed: 808080, Size: 480x640, Model hash: 4d651c7638, Model: expmixLine_v2, Denoising strength: 0.35, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: control_sd15_canny [fef5e48e], ControlNet-0 Weight: 1, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, ControlNet-1 Enabled: True, ControlNet-1 Module: depth_midas, ControlNet-1 Model: control_depth-fp16 [400750f6], ControlNet-1 Weight: 1, ControlNet-1 Guidance Start: 0, ControlNet-1 Guidance End: 1, ControlNet-2 Enabled: True, ControlNet-2 Module: softedge_hed, ControlNet-2 Model: control_hed-fp16 [13fee50b], ControlNet-2 Weight: 1, ControlNet-2 Guidance Start: 0, ControlNet-2 Guidance End: 1, Hires upscale: 2, Hires steps: 5, Hires upscaler: Lanczos

image

image

image

image

Note that you need

image

The result for 808080 is

image

I do not have deformityv6, so my result is perhaps a bit different, but the difference is minor

@lllyasviel
Collaborator

Also, I do not have the upscale script you use, so mine is a bit blurry

@lllyasviel
Collaborator

This is result with 0.75 weight

image

a (portrait:1.2) of a (woman|female:1.4) with short blonde hair|blue hair|white hair, under water, valorant style, samdoesarts, greg tocchini, jeremy mann, aleksi briclot, ellen jewett, alphonse mucha, masterpiece, best quality, highly detailed, (no lora or embed:0.0)
Negative prompt: anime, realistic, (ugly:1.2), childish, immature, disfigured, deformed, (bad proportions:1.2), (bad anatomy:1.2), bad hands, bad eyes, missing fingers, extra digit, fewer digits, oversaturated, grain, lowres, worst quality, low quality, signature, watermark, poorly drawn, poorly drawn face, poorly drawn hands, poorly drawn eyes, poorly drawn clothes, bad clothes, naked clothes, fused limbs, missing limbs, floating limbs, disconnected limbs, long neck, long body, duplicate, (round face:1.2), (beard:1.2), wavy hair, wild hair, bad hair, face tattoo, face mark, elf ears, glasses, intricate, complex, complicated, badhandv4, ng_deepnegative_v1_75t
Steps: 10, Sampler: DPM++ 2S a Karras, CFG scale: 7.5, Seed: 808080, Size: 480x640, Model hash: 4d651c7638, Model: expmixLine_v2, Denoising strength: 0.35, ControlNet-0 Enabled: True, ControlNet-0 Module: canny, ControlNet-0 Model: control_sd15_canny [fef5e48e], ControlNet-0 Weight: 0.75, ControlNet-0 Guidance Start: 0, ControlNet-0 Guidance End: 1, ControlNet-1 Enabled: True, ControlNet-1 Module: depth_midas, ControlNet-1 Model: control_depth-fp16 [400750f6], ControlNet-1 Weight: 0.75, ControlNet-1 Guidance Start: 0, ControlNet-1 Guidance End: 1, ControlNet-2 Enabled: True, ControlNet-2 Module: softedge_hed, ControlNet-2 Model: control_hed-fp16 [13fee50b], ControlNet-2 Weight: 0.75, ControlNet-2 Guidance Start: 0, ControlNet-2 Guidance End: 1, Hires upscale: 2, Hires steps: 5, Hires upscaler: Lanczos

image

@Orisen
Author

Orisen commented Apr 24, 2023

Awesome!
I tested it myself and can confirm everything is fixed. The minor differences are absolutely expected. That is really great! Thank you so much!

2023-04-24 07_11_14-Stable Diffusion
2023-04-24 07_11_28-Stable Diffusion
2023-04-24 07_11_38-Stable Diffusion
2023-04-24 07_11_47-Stable Diffusion
01123-808080-a (portrait_1 2) of a (woman_female_1 4) with short blonde hair_blue hair_white hair, under water, valorant style, samdoesarts,

If you want you can test it again on your side.
Here is the link to the deformity embedding:
https://civitai.com/models/16807?modelVersionId=53718

And here is the Upscaler:
https://mega.nz/folder/qZRBmaIY#nIG8KyWFcGNTuMX_XNbJ_g

directly taken from: https://upscale.wiki/wiki/Model_Database

Thanks a lot again! This is great! Very much appreciated!
