Batched inference with greedy sampling yields different completions #6583
Comments
This is an effect from using unified KV cache: ggerganov/whisper.cpp#1941 (comment)
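To unpack that a bit (my own gloss on the linked comment, not text from this thread): with a unified KV cache, the attention for a given sequence is computed over a buffer whose layout depends on what else is in the batch, so the floating-point reductions can run in a different order and the resulting logits may differ by a few ULPs. A tiny self-contained C++ illustration of how reduction order alone can change a result:

```cpp
// Illustration only: floating-point addition is not associative, so summing the
// same values in a different order can give a different result. The same effect
// in attention reductions can nudge logits enough to flip a greedy argmax
// between two near-tied tokens.
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> vals = {1e8f, 1.0f, -1e8f, 1.0f, 1e-3f, -1e-3f};

    // Sum left-to-right.
    float fwd = 0.0f;
    for (float v : vals) fwd += v;

    // Sum right-to-left (a different reduction order, as a different batch
    // layout might produce).
    float bwd = 0.0f;
    for (auto it = vals.rbegin(); it != vals.rend(); ++it) bwd += *it;

    printf("forward:  %.9g\n", fwd);
    printf("backward: %.9g\n", bwd); // differs from the forward sum
    return 0;
}
```

When two candidate tokens have near-identical logits, a difference this small is enough to flip the greedy pick and send the completions down different paths.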
Hi @ggerganov, I saw your comment here at #4130.
Is there any plan to implement this? Greedy generation producing different outcomes can sometimes be a problem.
No plan at the moment on my side. Haven't figured out a good way to implement this yet.
I've been investigating the performance of models with batched inference. I had expected slightly different results depending on the number of parallel sequences being evaluated (i.e. some small amount of random noise), but I have instead noticed a very distinct downward trend: more sequences leads to lower accuracy on the test set! Is this expected? Evaluating against the Google BoolQ dataset, the vertical axis shows accuracy percentage (note it starts at 48%) and the horizontal axis shows the number of sequences (each sequence answering an independent question).
This is not expected.
Thanks for confirming that. I'll do some more digging into this to see if I can turn up anything more.
I tried running the BoolQ dataset again, but this time asking each question in N parallel sequences. As far as I can tell this always produces the same answer across all sequences, no matter how many parallel sequences I run (up to 64). There's some variance in accuracy with different sequence counts, but nothing as huge as before. This is not what I had expected! Note that when running this test I made sure that no tokens were shared between sequences in the prompt batch, so each sequence is totally independent.
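For reference, here is a rough sketch (my own illustration, not code from this thread) of how each question can be given its own sequence ID inside a single llama_batch, so no prompt tokens or KV-cache entries are shared between sequences. It assumes the batch was allocated with llama_batch_init and enough seq_id capacity, and that the questions are already tokenized:

```cpp
// Sketch only: fill one llama_batch with N independent questions, one distinct
// seq_id per question. Assumes `batch` was created with
// llama_batch_init(n_tokens_max, /*embd=*/0, /*n_seq_max=*/questions.size()).
#include <vector>
#include "llama.h"

static void fill_batch(llama_batch & batch, const std::vector<std::vector<llama_token>> & questions) {
    batch.n_tokens = 0;
    for (int s = 0; s < (int) questions.size(); ++s) {
        const auto & q = questions[s];
        for (int i = 0; i < (int) q.size(); ++i) {
            const int t = batch.n_tokens++;
            batch.token   [t]    = q[i];
            batch.pos     [t]    = i;   // positions restart for every sequence
            batch.n_seq_id[t]    = 1;
            batch.seq_id  [t][0] = s;   // each question is its own sequence
            batch.logits  [t]    = (i == (int) q.size() - 1); // only the last token needs logits
        }
    }
}
```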
This issue was closed because it has been inactive for 14 days since being marked as stale. |
Using the batched.cpp example, modified to use greedy sampling, yields different completions (sample output below).
I'm using Windows, with llama.cpp compiled by w64devkit, on a laptop with an RTX 3070.
Correct me if I'm wrong, but sampling with the greedy sampler (i.e. always picking the most likely next token) should always yield the same result for the same prompt (with the same model).
Can this be a result of model quantization? (I'm using a Q6_K-quantized llama2-chat GGUF and also tried 8-bit.)
Note: llama.cpp was compiled without CUDA, so this is all on CPU.
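For concreteness, a minimal sketch of what greedy sampling means here (my own illustration, not the actual batched.cpp modification): pick the token with the highest logit, with no randomness involved.

```cpp
// Minimal sketch, not the actual patch: greedy sampling is just an argmax over
// the next-token logits. Given identical logits it is fully deterministic.
#include <cstddef>

// logits: n_vocab raw scores for one sequence's next token
// (e.g. as obtained via llama_get_logits_ith() in llama.cpp).
static int greedy_sample(const float * logits, int n_vocab) {
    int best_id = 0;
    for (int i = 1; i < n_vocab; ++i) {
        if (logits[i] > logits[best_id]) {
            best_id = i;
        }
    }
    return best_id;
}
```

Since this step itself has no randomness, any run-to-run divergence has to come from the logits, i.e. from the batch-dependent numerical differences described earlier in the thread.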