
Added support for multiple LoRA adapters #232

Merged: 5 commits merged into rustformers:main on May 18, 2023

Conversation

@LLukas22 (Contributor) commented May 16, 2023

Closes #227.

Moved the loading logic into the LoraPatches struct and added a LoraAdapter struct, which abstracts over multiple LoRA patches.

Renamed the `loar_adapter` field to `lora_adapters` in ModelParameters.

@philpax (Collaborator) commented May 16, 2023

This looks good, but I think LoraPatches should be renamed to LoraAdapter and the current LoraAdapter's logic should be moved back into the loader.

LoraAdapter's current logic doesn't add much (and it unwraps; you can use `.collect::<Result<Vec<_>>>()?` to avoid that), and the name is confusing, since it can apply multiple LoRAs.

Apart from that, good to go from me.
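
For reference, a minimal sketch of the collect-into-Result pattern philpax mentions, using a hypothetical helper that reads adapter files; the real loader's types and error handling differ.

```rust
use std::{fs, io, path::PathBuf};

/// Hypothetical helper: read several adapter files, failing on the first
/// error instead of calling `unwrap()` on each read.
fn read_adapters(paths: &[PathBuf]) -> io::Result<Vec<Vec<u8>>> {
    // An iterator of Result<T, E> collects into Result<Vec<T>, E> and
    // short-circuits on the first Err, so the caller can just use `?`.
    paths.iter().map(|p| fs::read(p)).collect()
}
```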

@LLukas22 (Contributor, Author)

Alright, I will move the loading logic back over and just iterate over the LoraAdapters while loading the tensors.
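
A rough sketch of that loader-side flow, with hypothetical type and method names (the real tensor-loading hooks differ):

```rust
/// Stand-in types for illustration only.
struct Tensor;
struct LoraAdapter;

impl LoraAdapter {
    /// Apply this adapter's low-rank delta to `tensor` if it patches `name`.
    fn patch(&self, name: &str, tensor: &mut Tensor) {
        let _ = (name, tensor); // the real ggml matmul/add would happen here
    }
}

/// After a base tensor is loaded, let every adapter patch it in order.
fn apply_adapters(name: &str, tensor: &mut Tensor, adapters: &[LoraAdapter]) {
    for adapter in adapters {
        adapter.patch(name, tensor);
    }
}
```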

@LLukas22 (Contributor, Author)

This should be ready, but I haven't tested it yet.

@LLukas22 (Contributor, Author)

Old LoRA adapters fail with the following error:

```
✗ Failed to load model
Error:
   0: Could not load model
   1: invariant broken: 256 <= 2 in Some("E:\\GGML_Models\\LoRA\\alpaca-7B.bin")
```

Is this a side-effect of the new quantization formats?

@philpax (Collaborator) commented May 17, 2023

Can you provide more information about which models you tried it with? That sounds like something's misaligned, because it comes from:

```rust
return Err(LoadError::InvariantBroken(format!("{n_dims} <= {ne_len}")));
```
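
For context, a hypothetical reconstruction of the kind of check that produces this error; the real loader's names and surrounding code differ.

```rust
#[derive(Debug)]
enum LoadError {
    InvariantBroken(String),
}

/// Hypothetical check: the declared dimension count must fit the shape buffer.
fn check_tensor_dims(n_dims: usize, ne_len: usize) -> Result<(), LoadError> {
    // A wildly large n_dims (e.g. 256) usually means the reader is misaligned
    // and is interpreting unrelated bytes as the tensor header.
    if n_dims > ne_len {
        return Err(LoadError::InvariantBroken(format!("{n_dims} <= {ne_len}")));
    }
    Ok(())
}
```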

@LLukas22 (Contributor, Author)

@LLukas22 (Contributor, Author)

Alright, I passed the wrong paths to the LoRA adapter 🤦. It's now working, and I tested it with the model and the two adapters linked above.

@philpax philpax merged commit 9e1158f into rustformers:main May 18, 2023
@LLukas22 LLukas22 deleted the feat/multiple-LoRA-support branch May 22, 2023 12:53
@hhamud hhamud mentioned this pull request Aug 7, 2023