vllm+lora #49

Closed
loxs123 opened this issue Feb 21, 2025 · 1 comment

Comments

loxs123 commented Feb 21, 2025

In the PEFT + vLLM fine-tuning setup, the current code may still have a bug (vLLM does not load the weights correctly).

# x_r1/x_grpo_trainer.py
# line 364-366
unwrapped_model.merge_adapter()              # fold the LoRA deltas into the base weights
state_dict = unwrapped_model.state_dict()    # weights that get handed to vLLM
unwrapped_model.unmerge_adapter()            # line 366: restores the unmerged base weights in place

Perhaps line 366 (the unmerge_adapter() call) should be removed.

Original issue: huggingface/trl#2856
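
For context, here is a minimal sketch (an illustration, not the repository's actual fix) of why the ordering matters. If state_dict() returns tensors that share storage with the live parameters (PyTorch's usual behavior) and unmerge_adapter() modifies the weights in place, then the merged values are reverted before vLLM ever reads them. Besides dropping line 366, one alternative is to snapshot the merged weights first:

# Sketch only: clone the merged weights so a later in-place unmerge cannot revert them.
unwrapped_model.merge_adapter()
state_dict = {
    name: tensor.detach().clone()   # independent copy of the merged weight
    for name, tensor in unwrapped_model.state_dict().items()
}
unwrapped_model.unmerge_adapter()   # safe now: the snapshot no longer aliases the model parameters
# pass `state_dict` on to vLLM's weight loading as before

Cloning costs extra memory while the copies are alive; simply removing the unmerge, as suggested above, avoids that but leaves the adapter merged into the base weights.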

dhcode-cpp (Owner) commented:

peft

Thanks for the feedback; I'll have it updated by tomorrow.

loxs123 closed this as completed Feb 21, 2025