A question about the preset weight of Lora #165
Comments
You need to check the input LoRA structure via
Thanks very much for your reply. I tried two LoRAs: LoRA A includes base blocks, and LoRA B does not include base blocks.
The base block is controlled by the first weight. From there, 38 blocks are double blocks, and the rest are weights for single blocks. If a LoRA doesn't have single blocks, those weights have no effect.
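A minimal sketch of that layout, assuming the weight string is a plain comma-separated list with the base weight at index 0, followed by the double-block weights and then the single-block weights; the helper name and the example length are illustrative, not taken from the node's source:

```python
# Sketch: splitting a comma-separated block-weight string into segments.
# Assumption: index 0 is the base weight, then double-block weights,
# then single-block weights; the actual counts come from the node/model.
def split_block_weights(vector: str, num_double: int):
    weights = [float(w) for w in vector.split(",")]
    base = weights[0]
    double = weights[1:1 + num_double]
    single = weights[1 + num_double:]
    return base, double, single

# Illustrative example: a 58-entry preset of all ones.
base, double, single = split_block_weights(",".join(["1"] * 58), num_double=38)
print(base, len(double), len(single))  # 1.0 38 19
```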
Does that mean that if the LoRA contains base blocks, I can use the preset directly? And if the LoRA does not contain base blocks, do I need to delete the first weight (1) from the preset?
The base block weight (1st weight) should always be given.
Thanks very much for your reply. Also, one last question: is there a loader available that can run an XY Plot for the FLUX UNet and LoRA? I tried to create a workflow with Efficiency Nodes, but it does not work properly and cannot connect to its dependencies.
For that, you would need to use a general XY Plot generation custom node and combine it using string manipulation functions, but there would be scalability issues. I plan to add a different type of XY Plot tool to the Inspire Pack in the future. |
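As a rough illustration of the string-manipulation approach mentioned above, one could generate the weight strings for one axis of the plot like this; the block count and the zero-one-block-per-cell scheme are assumptions for the example, not a recommended setup:

```python
# Hypothetical sketch: building block-weight strings for an XY sweep via
# plain string manipulation. The block count is an assumption here.
NUM_BLOCKS = 57  # assumed number of per-block weights after the base weight

def weight_string(zeroed: set) -> str:
    """Base weight first (always 1), then one weight per block."""
    blocks = ["0" if i in zeroed else "1" for i in range(NUM_BLOCKS)]
    return ",".join(["1"] + blocks)

# One axis of the plot: zero out a different block in each cell.
x_axis = [weight_string({i}) for i in range(0, NUM_BLOCKS, 10)]
for s in x_axis:
    print(s)
```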
It sounds complicated; I don't know how to create it. Looking forward to your feature update, it's really great work! I really admire programming experts like you :)
An all-in-one LoRA merge block weight XY plot test workflow is needed.
Hi, sorry to bother you again. I found a workflow that can run an XY plot for FLUX LoRAs: https://civitai.com/articles/7984/flux-and-lora-grid-for-research
(ComfyUI error report attached: Error Details, Stack Trace, System Information)
I want to know whether it could run properly after being modified. If so, how should I set or modify the parameters? My workflow:
Due to the functional limitations of Efficiency Nodes, this method cannot be used with FLUX.
I generated images using two sets of block weights:
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
and
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
In theory, the images they generate should be the same, but the results are different. I want to know why. Which one is the correct result? What I want is for the LoRA to keep all double blocks and ignore all single blocks (see the sketch at the end of this comment).
My workflow:
flux 分层lora 测试new.json
Also, I would like to ask what the first 1 at the beginning of each set of values means.
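A short sketch of why the two strings above may or may not be equivalent: they describe the same weights only if every entry missing from the shorter string is treated as 0. The padding rule below is an assumption for illustration, not the loader's confirmed behavior:

```python
# Sketch: comparing a full-length weight string with a truncated one
# under an assumed padding rule for missing trailing weights.
def parse(vector: str, total: int, fill: float):
    weights = [float(w) for w in vector.split(",")]
    # Assumption: weights not given are filled with `fill`; the real
    # loader's behavior may differ.
    return weights + [fill] * (total - len(weights))

full = ",".join(["1"] * 20 + ["0"] * 38)   # 20 ones, then 38 zeros
short = ",".join(["1"] * 20)               # 20 ones only

print(parse(full, 58, fill=0.0) == parse(short, 58, fill=0.0))  # True: same if fill is 0
print(parse(full, 58, fill=0.0) == parse(short, 58, fill=1.0))  # False: differ otherwise
```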