
Commit
[V1][Minor] Remove outdated comment (vllm-project#12968)
Signed-off-by: Woosuk Kwon <[email protected]>
WoosukKwon authored and kerthcet committed Feb 21, 2025
1 parent bf81ce0 commit c526702
Showing 1 changed file with 0 additions and 2 deletions.
vllm/v1/core/kv_cache_manager.py — 0 additions, 2 deletions
@@ -205,8 +205,6 @@ def allocate_slots(
             # Should not exceed the maximum number of blocks per request.
             # This is especially because the block table has the shape
             # [..., max_num_blocks_per_req].
-            # TODO(woosuk): Check and reject requests if
-            # num_prompt_tokens + max_tokens > max_model_len.
             self.max_num_blocks_per_req - len(req_blocks),
         )
         assert num_new_blocks > 0
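For context, the hunk sits inside `allocate_slots`, where the number of newly allocated KV-cache blocks is clamped so the block table's `[..., max_num_blocks_per_req]` shape is never exceeded. Below is a minimal, hypothetical sketch of that clamping calculation as a standalone function; only `max_num_blocks_per_req` and the block-table shape come from the hunk, while the function name, signature, and surrounding arithmetic are assumptions for illustration, not vLLM's actual API.

```python
import math


def num_new_blocks_needed(num_tokens: int, num_computed_tokens: int,
                          block_size: int, num_req_blocks: int,
                          max_num_blocks_per_req: int) -> int:
    # Hypothetical sketch (not vLLM's real signature): total blocks needed
    # to cover all tokens, computed plus newly scheduled, rounded up.
    total_blocks = math.ceil((num_computed_tokens + num_tokens) / block_size)
    # New blocks beyond those the request already holds, clamped so the
    # block table shape [..., max_num_blocks_per_req] is never exceeded.
    return min(total_blocks - num_req_blocks,
               max_num_blocks_per_req - num_req_blocks)
```

With a block size of 16, a request that has computed 30 tokens and holds 2 blocks needs 1 more block for 10 new tokens; the same formula caps a large request at whatever headroom remains under `max_num_blocks_per_req`.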
