feat: Add vLLM counter metrics access through Triton (#7493) #7546

Merged 1 commit on Aug 19, 2024
build.py (4 additions, 0 deletions)

@@ -1806,6 +1806,10 @@ def backend_clone(
         os.path.join(build_dir, be, "src", "model.py"),
         backend_dir,
     )
+    clone_script.cpdir(
+        os.path.join(build_dir, be, "src", "utils"),
+        backend_dir,
+    )

     clone_script.comment()
     clone_script.comment(f"end '{be}' backend")
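For context: `clone_script.cpdir` emits a copy of the given path into the backend install directory, so after this change the backend's `src/utils` directory lands next to `model.py`. Any helper modules in it then resolve as a sibling package from the model, with no extra packaging step. A minimal sketch, with an assumed module name (the actual contents of `src/utils` vary per backend):

```python
# Inside the installed backend's model.py: because build.py now copies
# src/utils next to model.py, its modules import as a sibling package.
# "utils.metrics" is an assumed module name, for illustration only.
from utils import metrics  # resolves to the copied src/utils/metrics.py
```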
docs/user_guide/metrics.md (6 additions, 0 deletions)

@@ -378,3 +378,9 @@ Further documentation can be found in the `TRITONSERVER_MetricFamily*` and
 The TRT-LLM backend uses the custom metrics API to track and expose specific metrics about
 LLMs, KV Cache, and Inflight Batching to Triton:
 https://github.com/triton-inference-server/tensorrtllm_backend?tab=readme-ov-file#triton-metrics
+
+### vLLM Backend Metrics
+
+The vLLM backend uses the custom metrics API to track and expose specific metrics about
+LLMs to Triton:
+https://github.com/triton-inference-server/vllm_backend?tab=readme-ov-file#triton-metrics
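For context on the custom metrics API this documentation refers to: the vLLM backend is a Triton Python backend, and Python backends report custom metrics through the `MetricFamily` API in `triton_python_backend_utils`. Below is a minimal sketch of registering and incrementing a counter from a Python model; the metric name, labels, and token counting are illustrative, not the exact metrics the vLLM backend registers.

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Register a counter family once per model; "kind" selects
        # COUNTER vs. GAUGE. The metric name here is hypothetical.
        self.tokens_family = pb_utils.MetricFamily(
            name="example_prompt_tokens_total",
            description="Cumulative prompt tokens processed by this model.",
            kind=pb_utils.MetricFamily.COUNTER,
        )
        # Create a labeled metric within the family.
        self.tokens_metric = self.tokens_family.Metric(
            labels={"model": args["model_name"]}
        )

    def execute(self, requests):
        responses = []
        for request in requests:
            # ... run inference; in a real backend the count would come
            # from tokenizing the request's prompt.
            num_tokens = 0  # placeholder
            self.tokens_metric.increment(num_tokens)
            responses.append(pb_utils.InferenceResponse(output_tensors=[]))
        return responses
```

Counters exported this way appear on Triton's Prometheus endpoint (default port 8002), e.g. `curl localhost:8002/metrics`; the counters the vLLM backend actually exposes are documented at the link above.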