A question about the preset weight of Lora #165

Open
vigee88 opened this issue Sep 25, 2024 · 11 comments

Comments

@vigee88

vigee88 commented Sep 25, 2024

I generate images using two sets of layered parameters: 1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0
and
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
In theory, the images they generate should be identical, but the results are different. I want to know why, and which one is the correct result. What I want is for the LoRA to keep all double blocks and ignore all single blocks.

My workflow:
flux 分层lora 测试new.json

Also, I would like to ask what the first 1 at the beginning of each set of weights means.

@ltdrdata
Owner

You need to check the input LoRA structure via the LoRA Block Info node.

@vigee88
Author

vigee88 commented Sep 25, 2024

> You need to check the input LoRA structure via the LoRA Block Info node.

Thanks very much for your reply. I tried two LoRAs: LoRA A includes base blocks, and LoRA B does not include base blocks.
But the images I generated using the same settings as before for these two LoRAs are still different.
May I ask how to set the parameters for a LoRA that does not include base blocks? Should I delete the first 1 in the preset?

@ltdrdata
Owner

The base block is controlled by the first weight. After that, the next 19 weights are for the double blocks, and the rest are the weights for the single blocks. If a LoRA doesn't have single blocks, those weights have no effect.
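
For illustration, here is a minimal sketch of that layout in Python. The split of 1 base weight + 19 double-block weights + 38 single-block weights is taken from the explanation above, and split_preset is a hypothetical helper, not part of the Inspire Pack API.

  # Hypothetical helper illustrating the FLUX weight layout described above:
  # index 0 = base block, indices 1-19 = double blocks, indices 20-57 = single blocks.
  def split_preset(preset: str):
      w = [float(x) for x in preset.split(",")]
      return w[0], w[1:20], w[20:58]

  preset = "1," + ",".join(["1"] * 19) + "," + ",".join(["0"] * 38)
  base, double, single = split_preset(preset)
  print(base)         # 1.0  -> base block kept
  print(sum(double))  # 19.0 -> all 19 double blocks kept
  print(sum(single))  # 0.0  -> all 38 single blocks zeroed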

@vigee88
Author

vigee88 commented Sep 26, 2024

> The base block is controlled by the first weight. After that, the next 19 weights are for the double blocks, and the rest are the weights for the single blocks. If a LoRA doesn't have single blocks, those weights have no effect.

Does that mean that if a LoRA contains base blocks, I can use the preset directly, and if it does not contain base blocks, I need to delete the first weight (1) from the preset?
I'm sorry, my English has gotten rusty over the years, and I'm not sure I'm expressing myself clearly.

@ltdrdata
Owner

> Does that mean that if a LoRA contains base blocks, I can use the preset directly, and if it does not contain base blocks, I need to delete the first weight (1) from the preset?

The base block weight (the 1st weight) should always be given.
You can only omit the weights for the single blocks.
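
One way to picture "omitting the single-block weights" is padding a shortened preset back to full length. A minimal sketch, assuming a pad value of 0 (an assumption only; whether the node pads with 0, pads with 1, or skips omitted weights entirely is worth verifying, since it determines whether the two presets from the original question are equivalent):

  # Sketch: normalizing a shortened preset to the full 58-weight FLUX layout.
  # pad_value is an ASSUMPTION -- the node's real default for omitted
  # single-block weights may differ, which could explain the differing images.
  def normalize_preset(preset: str, total: int = 58, pad_value: float = 0.0):
      weights = [float(x) for x in preset.split(",")]
      return weights + [pad_value] * (total - len(weights))

  full  = normalize_preset("1," + ",".join(["1"] * 19 + ["0"] * 38))
  short = normalize_preset("1," + ",".join(["1"] * 19))
  print(full == short)  # True only under the pad_value=0.0 assumption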

@vigee88
Author

vigee88 commented Sep 27, 2024

> The base block weight (the 1st weight) should always be given. You can only omit the weights for the single blocks.

Thanks very much for your reply. One last question: is there a loader available that can run an XY Plot for the FLUX UNet and LoRA? I tried to create a workflow with Efficiency Nodes, but it does not work properly and cannot connect its dependencies.

@ltdrdata
Owner

> Thanks very much for your reply. One last question: is there a loader available that can run an XY Plot for the FLUX UNet and LoRA? I tried to create a workflow with Efficiency Nodes, but it does not work properly and cannot connect its dependencies.

For that, you would need to use a general XY Plot generation custom node and combine it using string manipulation functions, but there would be scalability issues.

I plan to add a different type of XY Plot tool to the Inspire Pack in the future.
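
As a rough illustration of the string-manipulation approach (a hypothetical sketch, not Inspire Pack or Efficiency Nodes code), one could generate one preset string per double block and feed the resulting list to a generic XY Plot node as axis values:

  # Hypothetical sketch: build one LBW preset string per double block, each
  # zeroing a different block, to sweep block influence along one plot axis.
  # Block counts (1 base + 19 double + 38 single) follow the layout above.
  def double_block_sweep(n_double: int = 19, n_single: int = 38):
      presets = []
      for off in range(n_double):
          weights = ["1"]                                  # base block, always given
          weights += ["0" if i == off else "1" for i in range(n_double)]
          weights += ["0"] * n_single                      # ignore single blocks
          presets.append(",".join(weights))
      return presets

  axis_values = double_block_sweep()
  print(len(axis_values))  # 19 presets, one per double block
  print(axis_values[0])    # first double block zeroed, the rest kept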

@vigee88
Author

vigee88 commented Sep 27, 2024

> For that, you would need to use a general XY Plot generation custom node and combine it using string manipulation functions, but there would be scalability issues.
>
> I plan to add a different type of XY Plot tool to the Inspire Pack in the future.

It sounds complicated; I don't know how to create it. I'm looking forward to your feature update. It's really great work! I really admire programming experts like you :)

@pandayummy

An all-in-one LoRA merge block weight XY plot test workflow is needed.

@vigee88
Author

vigee88 commented Oct 12, 2024

> I plan to add a different type of XY Plot tool to the Inspire Pack in the future.

Hi, sorry to bother you again. I found a workflow that can run an XY plot for FLUX LoRAs: https://civitai.com/articles/7984/flux-and-lora-grid-for-research
I modified it to run the BMW_lora node, but it doesn't work normally.
It looks like KSampler and the XY Input: Lora Block Weight node conflict in some function?

ComfyUI Error Report

Error Details

  • Node Type: KSampler (Efficient)
  • Exception Type: NameError
  • Exception Message: free variable 'token_normalization' referenced before assignment in enclosing scope

Stack Trace

  File "D:\ComfyUI\execution.py", line 323, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\ComfyUI\execution.py", line 198, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)

  File "D:\ComfyUI\execution.py", line 169, in _map_node_over_list
    process_inputs(input_dict, i)

  File "D:\ComfyUI\execution.py", line 158, in process_inputs
    results.append(getattr(obj, func)(**inputs))

  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 1580, in sample
    define_model(model, clip, clip_skip[0], refiner_model, refiner_clip, refiner_clip_skip[0],

  File "D:\ComfyUI\custom_nodes\efficiency-nodes-comfyui\efficiency_nodes.py", line 1418, in define_model
    encode_prompts(positive_prompt, negative_prompt, token_normalization, weight_interpretation,

System Information

  • ComfyUI Version: v0.2.2-100-g14eba07a
  • Arguments: D:\ComfyUI\main.py --auto-launch --fp8_e4m3fn-unet --preview-method auto --disable-cuda-malloc
  • OS: nt
  • Python Version: 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
  • Embedded Python: false
  • PyTorch Version: 2.3.1+cu121

I want to know whether it can run properly after being modified. If so, how should I set or modify the parameters?

My workflow:
Flux+Lora XY图表功能.json
Thanks very much!

@ltdrdata
Owner

> I want to know whether it can run properly after being modified. If so, how should I set or modify the parameters?

Due to the functional limitations of Efficiency Nodes, this method cannot be used with FLUX.
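
For reference, the NameError in the report above is Python's closure-scope error: a nested function references a variable of the enclosing function that was never assigned because the assigning branch was skipped. A minimal sketch of the pattern (not the actual Efficiency Nodes code; the FLUX branch is an assumption):

  # Minimal reproduction of the error class from the report. If the branch
  # that assigns token_normalization is skipped (presumably the FLUX path),
  # the nested function's reference raises, on Python 3.10:
  # NameError: free variable 'token_normalization' referenced before
  # assignment in enclosing scope
  def define_model(is_flux: bool):
      if not is_flux:
          token_normalization = "none"  # only assigned on the non-FLUX path

      def encode_prompts():
          return token_normalization   # free variable from enclosing scope

      return encode_prompts()

  define_model(is_flux=True)  # reproduces the NameError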
