After doing a LoRA finetuning on the pretrained model using `finetune_lora.sh`, I tried to run inference using the CLI and received this error when loading `non_lora_trainables.bin` into the model:

```
RuntimeError: Error(s) in loading state_dict for LlavaMistralForCausalLM:
size mismatch for model.mm_projector.weight: copying a param with shape torch.Size([4096, 1024]) from checkpoint, the shape in current model is torch.Size([2097152, 1]).
```

There is no problem loading the Bakkala-1 checkpoint by running:
Can I have some help solving this crash when trying to use the LoRA adapter created by the finetuning, please? Thank you!
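One way to narrow this down is to inspect what shapes are actually stored in `non_lora_trainables.bin` and compare them against what the current model expects. This is a minimal debugging sketch, not the project's official loading code; it fabricates a tiny stand-in state dict so the snippet is runnable on its own, whereas in practice you would point `torch.load` at your real checkpoint file:

```python
import torch

# Stand-in for the real checkpoint: in practice, skip this and just
# torch.load your actual non_lora_trainables.bin produced by finetuning.
dummy = {"model.mm_projector.weight": torch.zeros(4096, 1024)}
torch.save(dummy, "non_lora_trainables.bin")

# Load on CPU and print every parameter name with its stored shape,
# so the mismatched entry (here mm_projector.weight) can be spotted
# and compared against the shape the current model reports.
ckpt = torch.load("non_lora_trainables.bin", map_location="cpu")
shapes = {name: tuple(t.shape) for name, t in ckpt.items()}
for name, shape in shapes.items():
    print(name, shape)
```

If the checkpoint stores `[4096, 1024]` while the instantiated model reports a flattened shape like `[2097152, 1]`, that points to a mismatch between the projector configuration used at finetuning time and the one used at inference time, rather than a corrupted file.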