feat: Add GRPC error codes to GRPC streaming if enabled by user. #7499
Conversation
@@ -0,0 +1,52 @@
# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved.
Is it possible to move the model to the L0_backend_python/lifecycle folder? I think it might be easier this way to track which test the models belong to.
We don't have any models or a models folder in
https://github.com/triton-inference-server/server/tree/main/qa/L0_backend_python/lifecycle
In test.sh we copy over from python_models and create the models folder with the versions inside.
I have kept this new model in parallel with the existing models used by L0_backend_python/lifecycle.
You can always create a models subfolder under the L0_* tests.
thanks Olga, that's what I meant.
In the existing test.sh we run
rm -fr *.log ./models
before we start the test. For me to add models to L0_backend_python/lifecycle I would need to remove this. Since we clean up the models folder before every test, I would need to remove every instance where we delete it, e.g.:
server/qa/L0_backend_python/lifecycle/test.sh, line 103 (commit 5320009):
rm -rf models/
server/qa/L0_backend_python/lifecycle/test.sh, line 170 (commit 5320009):
rm -rf models/
I'm not sure I should change the existing design; it might impact other tests too.
Name the folder differently and copy from it to models?
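A minimal sketch of that suggestion: keep the checked-in models in a differently named source folder (the name `lifecycle_models` here is a hypothetical placeholder, not from the PR) and recreate `models` from it at the start of each run, so the existing `rm -rf models/` cleanup lines can stay untouched.

```python
import shutil
from pathlib import Path

def prepare_models(src="lifecycle_models", dst="models"):
    """Recreate a throwaway ``models`` folder from a checked-in source
    folder before a test run. Folder names are illustrative assumptions."""
    dst_path = Path(dst)
    if dst_path.exists():
        shutil.rmtree(dst_path)      # mirrors the existing rm -rf models/ step
    shutil.copytree(src, dst_path)   # fresh copy for this test run
    return dst_path
```

With this pattern, deleting `models/` between tests never touches the source-of-truth folder.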
@tanmayv25 suggested I move all the changes from L0_lifecycle to a new L0_ test dedicated to this feature going forward.
The reason I made changes here is that L0_lifecycle/models already had models that send errors programmatically, which let me reuse them.
Will resolve this comment, along with the original comment by @tanmayv25, in a new PR after the cherry-pick.
Keeping it unresolved for now; will mark this resolved after the new PR.
…) (#7555) Co-authored-by: Indrajit Bhosale <[email protected]>
What does the PR do?
Enhances GRPC streaming to send GRPC error codes when enabled by the user.
Users can enable or disable this feature by including "grpc_strict": true in the GRPC headers.
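As an illustration of the opt-in, the sketch below builds the header map and a small helper that interprets the flag the way the description implies. The header key `grpc_strict` comes from the PR description; the helper name, default value, and case-insensitive parsing are assumptions, not taken from the server code.

```python
# Hypothetical helper: decide whether strict gRPC error codes were
# requested, based on a "grpc_strict" entry in the request headers.
def grpc_strict_enabled(headers):
    # Treat an absent header as "feature off" (assumed default).
    value = headers.get("grpc_strict", "false")
    return str(value).strip().lower() == "true"

# A client would attach this header when opening the stream
# (sketch; the exact client API call is not shown in the PR):
headers = {"grpc_strict": "true"}

print(grpc_strict_enabled(headers))  # -> True  (strict mode on)
print(grpc_strict_enabled({}))       # -> False (defaults to off)
```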
Checklist
PR title follows the <commit_type>: <Title> convention.
Commit Type: check the conventional commit type box and add the label to the GitHub PR.
Related PRs:
None
Where should the reviewer start?
The ExtractStateFromHeaders() and Process() functions in stream_infer_handler.cc.
Test plan:
Background
Customers requested that GRPC error codes be included in the GRPC response when errors originate from core.
TEP link: https://docs.google.com/document/d/1TfNAMYLsPuLrduBtAKo64YqV55IWQhcn6zayc_CBY18/edit?pli=1