
dequantize_4bit() gives wrong output when working in cuda graph mode #1308

Closed
chenqianfzh opened this issue Aug 6, 2024 · 4 comments · Fixed by #1330
Labels: bug (Something isn't working) · contributions-welcome (We welcome contributions to fix this issue!)

Comments

@chenqianfzh

System Info

Linux

Reproduction

I am working on BitsAndBytes support in vLLM (https://github.com/vllm-project/vllm). My eager-mode implementation works correctly and has been merged.

However, I found that the weights returned by dequantize_4bit() under CUDA graph mode differ from those returned in eager mode, which makes the model produce nonsense output.

Does anybody have insights into this issue?

I tried to reduce it to a simple script, but that turned out to be hard because capturing the CUDA graph is non-trivial. The issue reproduces consistently, though, and I would be more than happy to work with community members and share the data I have collected.
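For reference, here is a minimal sketch of the kind of eager-vs-graph comparison I mean, assuming a recent PyTorch with CUDA graph support and calling bitsandbytes.functional directly (the capture setup inside vLLM is considerably more involved, so treat this as an illustration rather than a confirmed standalone repro):

```python
import torch
import bitsandbytes.functional as F

# Quantize a random weight to NF4 (4-bit) once, outside the graph.
weight = torch.randn(4096, 4096, dtype=torch.float16, device="cuda")
qweight, quant_state = F.quantize_4bit(weight, quant_type="nf4")

# Reference result in eager mode.
eager_out = F.dequantize_4bit(qweight, quant_state)

# Warm up on a side stream, as recommended before CUDA graph capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        F.dequantize_4bit(qweight, quant_state)
torch.cuda.current_stream().wait_stream(s)

# Capture the dequantization into a CUDA graph and replay it.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    graph_out = F.dequantize_4bit(qweight, quant_state)
g.replay()
torch.cuda.synchronize()

# Expected: True. With the bug described above, the replayed output
# differs from the eager result (or capture may fail outright).
print(torch.allclose(eager_out, graph_out))
```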

Expected behavior

CUDA graph mode is expected to produce the same dequantized tensors as eager mode.

@matthewdouglas added the bug and contributions-welcome labels on Aug 8, 2024
@matthewdouglas (Member)

Thank you for bringing this to our attention @chenqianfzh! I'm not personally aware of a known issue here and believe it's worth investigating further. If you could help provide some more details on the repro steps, that would be appreciated!

For bookkeeping, this relates to vLLM issue vllm-project/vllm#5569; the current workaround is to enforce eager mode: vllm-project/vllm#6846

cc: @Titus-von-Koeller

@jeejeelee (Contributor)

@matthewdouglas @chenqianfzh

I also encountered the same problem mentioned above. I did a quick investigation, and the likely cause seems to be that the kDequantizeBlockwise kernel is launched without the current CUDA stream being passed to it (this pattern is common in BNB). If you want to investigate further, you can refer to the cudagraph test for verification.
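To make the stream point concrete, here is a rough sketch of the pattern the Python side would need; the wrapper and symbol names below are placeholders for illustration, not the actual bitsandbytes code. During capture, PyTorch records work on torch.cuda.current_stream(), so the native launch has to receive that stream's raw handle instead of falling back to the legacy default stream:

```python
import ctypes
import torch

def launch_dequantize_on_current_stream(lib, *kernel_args):
    # The stream PyTorch is currently issuing work on; under torch.cuda.graph
    # this is the capture stream, not the legacy default stream.
    stream = ctypes.c_void_p(torch.cuda.current_stream().cuda_stream)
    # `cdequantize_blockwise_with_stream` is a placeholder symbol, not a real
    # bitsandbytes export; the point is the trailing cudaStream_t argument,
    # which the C++/CUDA side must forward to the kernel launch.
    lib.cdequantize_blockwise_with_stream(*kernel_args, stream)
```

A kernel launched on the default stream while another stream is capturing is either rejected by the driver (global capture mode) or executed immediately at capture time instead of being recorded into the graph (relaxed mode), which would explain why the replayed dequantized weights come out wrong.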

@devlup commented Sep 11, 2024

Hi @matthewdouglas, vLLM is waiting for a new release to pick up this fix since they install from PyPI. When do you plan to cut a new release?

@Titus-von-Koeller (Collaborator)

@devlup We're planning a release early this week.
