[V1][PP] Fix memory profiling in PP (#13315)
Signed-off-by: Woosuk Kwon <[email protected]>
WoosukKwon authored Feb 15, 2025
1 parent 6a854c7 · commit 0c73026
Showing 1 changed file with 6 additions and 5 deletions.
vllm/v1/worker/gpu_model_runner.py (6 additions, 5 deletions)
@@ -1158,11 +1158,12 @@ def profile_run(self) -> None:
         # Trigger compilation for general shape.
         hidden_states = self._dummy_run(self.max_num_tokens,
                                         dummy_kv_caches)
-        if not get_pp_group().is_last_rank:
-            return hidden_states
-        hidden_states = hidden_states[logit_indices]
-        logits = self.model.compute_logits(hidden_states, None)
-        # TODO(woosuk): Consider the memory usage of the sampler.
+        if get_pp_group().is_last_rank:
+            hidden_states = hidden_states[logit_indices]
+            logits = self.model.compute_logits(hidden_states, None)
+            # TODO(woosuk): Consider the memory usage of the sampler.
+        else:
+            logits = None
         torch.cuda.synchronize()
         del hidden_states, logits
         self.encoder_cache.clear()
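For context on the fix: the old code returned hidden_states early on non-last pipeline-parallel (PP) ranks, so those ranks never reached the torch.cuda.synchronize() and the del of the dummy tensors that follow the branch. The new code keeps a single fall-through path: only the last rank computes logits, but every rank synchronizes and frees its tensors before profiling concludes. Below is a minimal, self-contained sketch of that control-flow pattern; the function signature, tensor shapes, and the is_last_rank flag are illustrative stand-ins, not vLLM's actual API.

    import torch

    def profile_run(is_last_rank: bool, max_num_tokens: int = 8) -> None:
        # Stand-in for the dummy forward pass (self._dummy_run in vLLM).
        hidden_states = torch.randn(max_num_tokens, 16)
        if is_last_rank:
            # Only the last PP rank computes logits during profiling.
            logits = hidden_states @ torch.randn(16, 32)
        else:
            # Intermediate ranks skip logits but still fall through to the
            # shared teardown instead of returning early.
            logits = None
        # Every rank now reaches the synchronize-and-free step.
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        del hidden_states, logits

    profile_run(is_last_rank=False)  # intermediate rank: no early return
    profile_run(is_last_rank=True)   # last rank: logits computed, then freed

The design point is simply that replacing an early return with an if/else keeps one exit path, so cleanup added after the branch applies to every rank.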
