Add QK=128 q4_1 for GPTQ #937
Conversation
make gptq model (not support act-order) use python 3.10
do not force the prompt file to end with a new line (#908)
Can you provide a perplexity comparison between the original …
Some parts of my experiment appear to be wrong (ctx was set to 1024). In particular, the current results are worse than those of q4_1, so I am closing this PR.
Currently, GPTQ models are converted to the q4_1 (QK=32) format. However, this is inefficient, since GPTQ generally recommends a group size (QK) of 128.
To address this, I added a new q4_1-style format with QK=128, called q4_2.
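For illustration only (this is a sketch, not the code from this PR): assuming q4_2 keeps the same float scale/min layout as the existing q4_1 block, a QK=128 block might look like the following, amortizing the per-block overhead over four times as many weights. The names `QK_Q4_2` and `block_q4_2` are placeholders.

```c
#include <stdint.h>

#define QK_Q4_1  32    // existing q4_1 block size
#define QK_Q4_2 128    // proposed larger block size (hypothetical name)

// Existing q4_1: 4 + 4 + 32/2 = 24 bytes per 32 weights (6.0 bits/weight).
typedef struct {
    float   d;                // scale
    float   m;                // min
    uint8_t qs[QK_Q4_1 / 2];  // two 4-bit quants per byte
} block_q4_1;

// QK=128 variant: 4 + 4 + 128/2 = 72 bytes per 128 weights (4.5 bits/weight);
// the scale/min overhead is spread over 4x as many weights.
typedef struct {
    float   d;                // scale
    float   m;                // min
    uint8_t qs[QK_Q4_2 / 2];  // two 4-bit quants per byte
} block_q4_2;
```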
Applying this to GPTQ models gives roughly a 20% speed improvement and a 17% reduction in memory usage.
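As a rough sanity check on the memory figure (my own arithmetic, not from the PR): with the layout sketched above, q4_1 costs 24 bytes per 32 weights (6.0 bits/weight) while the QK=128 variant costs 72 bytes per 128 weights (4.5 bits/weight), about 25% less for the quantized tensors; the whole-model saving is smaller because some tensors remain in f16/f32. The speed gain would come mostly from loading the scale and min once per 128 values instead of once per 32, as in this hedged scalar sketch (assuming the q4_1 encoding x = d*q + m with consecutive pairs packed into low/high nibbles):

```c
// Scalar dequantization sketch for the hypothetical block_q4_2 above.
// Real kernels would use SIMD; this only illustrates the per-block
// scale/min amortization.
static void dequantize_block_q4_2(const block_q4_2 *b, float *y) {
    const float d = b->d;   // loaded once per 128 weights
    const float m = b->m;
    for (int i = 0; i < QK_Q4_2 / 2; ++i) {
        const uint8_t v = b->qs[i];
        y[2*i + 0] = d * (v & 0x0F) + m;  // low nibble
        y[2*i + 1] = d * (v >>   4) + m;  // high nibble
    }
}
```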
I'm sure this implementation will make llama run faster and more robustly.
I don't currently have an ARM CPU, so the ARM path is untested and may not work.