[Feature]: Avoid KV Cache and offload Model weights in RL workloads #11638

Open
1 task done
PeterSH6 opened this issue Dec 30, 2024 · 0 comments
PeterSH6 commented Dec 30, 2024

🚀 The feature, motivation and pitch

Thanks for the awesome inference library! I'm writing to request two features that would be beneficial to RL post-training workloads.

In online PPO (or GRPO, online DPO), the policy model alternates between auto-regressive generation (using vLLM or another inference engine) and forward + backward computation with the training infrastructure. Therefore, during the training stage, we hope to free the KV cache and even offload the model parameters stored in vLLM (since the model-parallel strategies used for generation and training can differ).

Therefore, we propose two sets of APIs (in the Worker, GPUExecutor, LLMEngine, and LLM classes) and one model-initialization option:

  • free_cache_engine() and init_cache_engine(): Users can call free_cache_engine() on an LLM instance, and the calling chain would be LLM.free_cache_engine() -> LLMEngine.free_cache_engine() -> GPUExecutor.free_cache_engine() -> Worker.free_cache_engine(). A similar calling chain applies to init_cache_engine(), where Worker.init_cache_engine() simply calls the existing _init_cache_engine() in the Worker class.
    After generation, the RL framework can call llm.free_cache_engine() to release the KV cache, and after update_policy it calls llm.init_cache_engine() again (see the usage sketch after this list). We have implemented an example in the veRL framework (see veRL), which utilizes an SPMD version of vLLM ([RFC]: Fully SPMD Execution for Offline Inference #11400).
  • offload_model_weights(): We maintain a self.cpu_model in the Worker, and the calling chain is similar to the one above. After generation, the RL framework calls llm.offload_model_weights() to offload the weights to CPU and reloads them before the next iteration.
  • Model init choice: Currently, the vLLM engine initializes the model from AutoModel.from_pretrained(). However, in RL workloads, we hope vLLM can provide an option that only initializes the model structure without downloading the pre-trained weights; we will later synchronize the weights from an HF model outside the vLLM engine (the sketch after this list illustrates one possible shape for this).
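
To make the intended usage concrete, here is a minimal, hypothetical sketch of how the proposed APIs could be driven from an RL training loop. free_cache_engine(), init_cache_engine(), and offload_model_weights() are the APIs being requested and do not exist in vLLM today; train_one_step() and sync_weights_to_vllm() stand in for the RL framework's own logic; the model name and the load_format="dummy" option (our understanding of a way to build the model without loading a pre-trained checkpoint) are illustrative only:

```python
from vllm import LLM, SamplingParams

# Illustrative setup; load_format="dummy" is our understanding of a way to
# build the model without downloading pre-trained weights (bullet 3 above),
# and enforce_eager is discussed under "Potential Issues" below.
llm = LLM(model="facebook/opt-125m", load_format="dummy", enforce_eager=True)
sampling_params = SamplingParams(temperature=1.0, max_tokens=512)

for iteration in range(num_iterations):            # num_iterations: placeholder
    # Generation stage: vLLM owns the GPU memory (weights + KV cache).
    rollouts = llm.generate(prompts, sampling_params)   # prompts: placeholder

    # Hand the GPU memory back to the training framework (proposed APIs).
    llm.free_cache_engine()        # release KV cache blocks
    llm.offload_model_weights()    # move vLLM's weight copy to CPU

    # Training stage: fwd + bwd with the training infrastructure (placeholders).
    train_one_step(actor_model, rollouts)
    sync_weights_to_vllm(actor_model, llm)   # push updated weights back into vLLM

    # Rebuild the KV cache before the next generation stage (proposed API).
    llm.init_cache_engine()
```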
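And a rough sketch of what the Worker-level ends of the calling chains might look like. This is only our assumption of one possible implementation, not vLLM's actual code: cache_engine, gpu_cache, model_runner, and the private _init_cache_engine() helper exist in today's Worker, while cpu_model and the offloading trick below are hypothetical.

```python
import torch

class Worker:
    # ...existing vLLM Worker attributes assumed here: self.cache_engine,
    # self.gpu_cache, self.model_runner, and self._init_cache_engine()...

    def free_cache_engine(self):
        # Drop the KV cache so the training framework can reuse the GPU memory.
        self.cache_engine = None
        self.gpu_cache = None
        torch.cuda.empty_cache()

    def init_cache_engine(self):
        # Re-create the KV cache with the block counts profiled at startup.
        self._init_cache_engine()

    def offload_model_weights(self):
        # Keep a CPU copy of the weights and release the GPU copy; a later
        # weight sync (or reload) restores them before the next generation stage.
        model = self.model_runner.model
        self.cpu_model = {
            name: param.detach().to("cpu", copy=True)
            for name, param in model.named_parameters()
        }
        for param in model.parameters():
            param.data = torch.empty(0, device=param.device)
        torch.cuda.empty_cache()
```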

Potential Issues:
When using free_cache_engine and offload_model_weights, we have to disable CUDAGraph, which could reduce generation throughput.
An issue in SGLang reports a similar problem: sgl-project/sglang#2542
Currently, in veRL, we simply set enforce_eager=True in all settings (see the snippet below).
It would be better to keep using CUDAGraph during generation while still being able to free the KV cache and offload the model weights during training!
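
For reference, the workaround we currently use is simply disabling CUDA graph capture at engine construction time (the model name here is only an example):

```python
from vllm import LLM

# enforce_eager=True disables CUDA graph capture, which makes it safe to free
# and re-create GPU memory between the generation and training stages, at the
# cost of some generation throughput.
llm = LLM(model="facebook/opt-125m", enforce_eager=True)
```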

Looking forward to your responses and thanks for any help!

CC

@comaniac @WoosukKwon @youkaichao @happierpig

Alternatives

No response

Additional context

No response

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.