🚀 The feature, motivation and pitch
Thanks for the awesome inference library! I'm writing to request two features that would be beneficial to RL post-training workloads.
In online PPO (as well as GRPO and online DPO), the policy model alternates between auto-regressive generation (using vLLM or another inference engine) and forward + backward computation in the training infrastructure. During the training stage, we therefore want to free the KVCache and even offload the model parameters held by vLLM (since the model-parallel strategies used for generation and training can differ).
Therefore, we propose two sets of APIs in the Worker, GPUExecutor, LLMEngine, and LLM classes, plus one model init choice:

free_cache_engine() and init_cache_engine(): Users can call free_cache_engine() on an LLM instance, and the calling chain could be LLM.free_cache_engine() -> LLMEngine.free_cache_engine() -> GPUExecutor.free_cache_engine() -> Worker.free_cache_engine(). A similar calling chain applies to init_cache_engine(), where Worker.init_cache_engine() simply calls the existing _init_cache_engine() in the Worker class. After generation, the RL framework can call llm.free_cache_engine() to release the KVCache, and after update_policy it calls llm.init_cache_engine(). We have implemented an example in the veRL framework. See veRL, which utilizes an SPMD version of vLLM ([RFC]: Fully SPMD Execution for Offline Inference #11400).
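To make the intended usage concrete, here is a minimal sketch of an RL iteration driving the proposed methods. This is illustrative only: free_cache_engine() and init_cache_engine() do not exist in vLLM today, and update_policy() stands in for the RL framework's own training step.

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")           # placeholder model
params = SamplingParams(temperature=1.0, max_tokens=128)
prompts = ["..."]                              # rollout prompts from the RL framework

for _ in range(3):                             # RL iterations
    rollouts = llm.generate(prompts, params)   # generation needs the KVCache
    llm.free_cache_engine()                    # proposed: release the KVCache for training
    update_policy(rollouts)                    # placeholder: fwd + bwd + optimizer step
    llm.init_cache_engine()                    # proposed: re-allocate the KVCache for the next rollout
```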
offload_model_weights(): We maintain a self.cpu_model in the Worker, and the calling chain is similar to the one above. After generation, the RL framework calls llm.offload_model_weights() to offload the weights to CPU, and reloads them in the next iteration.
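A rough sketch of what the Worker-level implementation could look like; the actual vLLM Worker is more involved, and reload_model_weights() is a hypothetical companion method for the reload step.

```python
import torch

class Worker:  # illustrative sketch only, not the real vLLM Worker
    def offload_model_weights(self) -> None:
        """Stash the weights in CPU memory and release the GPU copies."""
        self.cpu_model = {name: p.detach().cpu() for name, p in self.model.named_parameters()}
        for p in self.model.parameters():
            p.data = torch.empty(0, device=p.device)   # drop the GPU storage
        torch.cuda.empty_cache()                       # hand the freed blocks back

    def reload_model_weights(self) -> None:
        """Move the stashed weights back to the GPU before the next generation phase."""
        for name, p in self.model.named_parameters():
            p.data = self.cpu_model[name].to("cuda", non_blocking=True)
```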
Model init choice: Currently, the vLLM Engine initializes the model via AutoModel.from_pretrained(). However, in RL workloads, we hope vLLM can provide an option that only initializes the model architecture without downloading the pre-trained weights; we will later synchronize the model with an HF model outside the vLLM Engine.
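To illustrate the requested init path, here is a hedged sketch using Hugging Face APIs (the model names and the final sync step are placeholders; vLLM would need to expose an equivalent option):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Build the architecture from the config only, so no pre-trained weights are downloaded.
config = AutoConfig.from_pretrained("facebook/opt-125m")
rollout_model = AutoModelForCausalLM.from_config(config)   # randomly initialized, nothing fetched

# Later, outside the vLLM Engine, the RL trainer supplies the real weights,
# e.g. by pushing its own HF model's state_dict into the rollout workers.
trainer_model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
rollout_model.load_state_dict(trainer_model.state_dict())
```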
Potential Issues:

When using free_cache_engine and offload_model_weights, we have to disable CUDAGraph, which could reduce generation throughput. An issue in SGLang reports a similar problem: sgl-project/sglang#2542
Currently, in veRL, we simply set enforce_eager=True in all settings. Ideally, we could keep CUDAGraph enabled during generation while still freeing the KVCache and model weights during training!
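For reference, the current workaround is just the eager-mode flag on the LLM constructor (model name is a placeholder):

```python
from vllm import LLM

# Disable CUDAGraph capture so that freeing the KVCache / offloading weights stays safe.
llm = LLM(model="facebook/opt-125m", enforce_eager=True)
```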
Looking forward to your responses and thanks for any help!
CC @comaniac @WoosukKwon @youkaichao @happierpig
Alternatives
No response
Additional context
No response