Commit
Fix Gemma loading after quantization or LoRA.
lm_head is not used in vLLM for Gemma because its weight is tied to embed_tokens. However, checkpoints whose structure is regenerated by quantization, LoRA merging, etc. sometimes contain a duplicate lm_head.weight tensor, which causes an error at load time. To avoid this, skip loading lm_head.weight.
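A minimal sketch of the idea, not the exact vLLM patch: the `params_dict` lookup and plain `copy_` call below are simplified stand-ins for the model's real weight loader, and the function name is hypothetical.

```python
from typing import Dict, Iterable, Tuple

import torch


def load_gemma_weights(
    params_dict: Dict[str, torch.nn.Parameter],
    weights: Iterable[Tuple[str, torch.Tensor]],
) -> None:
    """Copy checkpoint tensors into model parameters, skipping lm_head."""
    for name, loaded_weight in weights:
        # Gemma ties lm_head to embed_tokens, so the model holds no separate
        # lm_head parameter. Checkpoints rewritten by quantization or LoRA
        # merging may still ship an lm_head.weight tensor; looking it up in
        # params_dict would fail, so it is skipped here.
        if "lm_head.weight" in name:
            continue
        param = params_dict[name]
        param.data.copy_(loaded_weight)
```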