Your current environment
GPU: A10
vLLM version: 0.6.3.post1
Model Input Dumps
No response
🐛 Describe the bug
For the Qwen series, such as Qwen/Qwen2.5-7B-Instruct, it seems that vLLM cannot apply quantization, no matter whether bitsandbytes or AWQ is used. Even the pre-quantized Unsloth version, unsloth/Qwen2.5-7B-Instruct-bnb-4bit, does not work. Error message:

AttributeError: Model Qwen2ForCausalLM does not support BitsAndBytes quantization yet.
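A minimal reproduction sketch, assuming the standard vLLM offline `LLM` API; the model name and versions come from the report above, but the exact call pattern is my assumption, not code from the original report:

```python
# Minimal reproduction sketch (assumed invocation, not from the original report).
# On vLLM 0.6.3.post1 with an A10 GPU this is expected to raise:
#   AttributeError: Model Qwen2ForCausalLM does not support BitsAndBytes quantization yet.
from vllm import LLM

llm = LLM(
    model="unsloth/Qwen2.5-7B-Instruct-bnb-4bit",  # pre-quantized checkpoint from the report
    quantization="bitsandbytes",                   # request BnB quantization
    load_format="bitsandbytes",                    # vLLM requires this alongside quantization="bitsandbytes"
    dtype="float16",
)
print(llm.generate("Hello")[0].outputs[0].text)
```

The error is raised during model loading, before any generation runs, which suggests the Qwen2ForCausalLM implementation in this vLLM version simply lacks the BitsAndBytes weight-loading support rather than there being a problem with the checkpoints themselves.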