This PR fixes `Q8_0` quantized KV cache when not using FA. This was broken because online `Q8_0` quantization packed quants into blocks of 128 (`block_q8_0_x4`), so `K*Q` became garbage when using a `Q8_0` quantized K-cache without FA.
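To illustrate the mismatch, here is a small standalone sketch. The struct names mirror the ggml ones, but the field types (fp16 scales replaced by `uint16_t`) and the exact `block_q8_0_x4` layout shown (four scales followed by 4×32 quants) are assumptions made for the example, not copied from the code.

```cpp
// Sketch of the layout mismatch: the online quantizer writes K-cache rows as
// interleaved blocks of 128 quants, while the non-FA K*Q path walks the same
// buffer as plain block_q8_0 (scale, 32 quants, scale, 32 quants, ...).
#include <cstddef>
#include <cstdint>
#include <cstdio>

constexpr int QK8_0 = 32;

struct block_q8_0 {            // plain layout: 34 bytes per 32 quants
    uint16_t d;                // fp16 scale (stand-in type)
    int8_t   qs[QK8_0];
};

struct block_q8_0_x4 {         // interleaved layout: 136 bytes per 128 quants
    uint16_t d[4];             // fp16 scales of the four sub-blocks
    int8_t   qs[4 * QK8_0];    // quants of all four sub-blocks
};

int main() {
    // Where the quants of sub-block 1 are actually written in the x4 layout ...
    std::size_t written = offsetof(block_q8_0_x4, qs) + QK8_0;          // 8 + 32 = 40
    // ... versus where a plain-Q8_0 reader of the same buffer looks for them.
    std::size_t read = sizeof(block_q8_0) + offsetof(block_q8_0, qs);   // 34 + 2 = 36
    std::printf("sub-block 1 quants: written at byte %zu, read from byte %zu\n",
                written, read);
    // The offsets disagree, so scales get mixed with quants -> garbage K*Q.
    return 0;
}
```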
The FA performance improvements are for `AVX2/Zen4`. The following table shows a `PP-512` comparison between the main branch and this PR with FA, using `bf16` or `Q8_0` for the KV cache. The model is LLaMA-3.1-8B quantized to `IQ4_XS` and run-time-repacked to `IQ4_XS_R4`. The CPU is a Ryzen 7950X. When the quoted uncertainty in the table is zero, I have run just a single repetition in `llama-bench` (it takes quite a while to process 16k or even 32k tokens).
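For reference, a single row of such a table could be produced with an invocation along these lines. This is a guessed command, not one taken from the PR: the model path is a placeholder, `-p` would be set to the prompt length of the given row, and any ik_llama.cpp-specific run-time-repack flag is omitted because it is not named here.

```bash
# Hypothetical llama-bench run: prompt processing only (-n 0), flash attention
# enabled, Q8_0 for both K and V caches, and a single repetition (-r 1).
./llama-bench -m llama-3.1-8b-iq4_xs.gguf \
    -p 512 -n 0 -r 1 \
    -fa 1 -ctk q8_0 -ctv q8_0
```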