Performance Example Application: [ 0] INTERNAL - No valid requests recorded within time interval. Please use a larger time window. #136
Comments
What version of the inference server are you using? It appears that you are likely using a version of the perf_client that is not compatible with the server. Typically, because we are still in beta, you must use a perf_client with the same version as the server.
@deadeyegoodwin I am using the tag nvcr.io/nvidia/tensorrtserver:19.02-py3; where can I get a perf_client version that matches the server version?
Where did you get the perf_client executable?
I downloaded the TRTIS master branch to my server and built the tensorrtserver_clients Docker image from there.
The master branch is far ahead of the 19.02 branch. You should use the r19.02 branch and build the clients on that branch (since that matches your server). The documentation has recently been made more explicit about using the correct branch. You can see in the master branch docs: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-master-branch-guide/docs/client.html#building-the-client-libraries-and-examples
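For context, the suggested workflow is roughly the sketch below; the repository URL and Dockerfile name are assumptions about the TRTIS repo layout of that era, not details taken from this thread:

# Check out the release branch that matches the 19.02 server image,
# then build the client image from that tree.
git clone https://github.com/NVIDIA/tensorrt-inference-server.git
cd tensorrt-inference-server
git checkout r19.02
docker build -t tensorrtserver_clients -f Dockerfile.client .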
Issue solved! @deadeyegoodwin, thanks for updating the doc.
Hi all, I am running the perf_client example applications and getting the errors below:
Server: 4x T4 GPUs
Docker image used for the server:
nvcr.io/nvidia/tensorrtserver:19.02-py3
Command used to run the client:
nvidia-docker run -it --rm --net=host --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 tensorrtserver_clients
Example 1:
Example 2:
Example 3:
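For reference, a perf_client invocation of the kind used in these examples looks roughly like the sketch below; the model name and option values are illustrative placeholders, not the ones from the original runs. The "No valid requests recorded within time interval" error usually points at too small a -p measurement window, but here the root cause was the client/server version mismatch discussed in the comments above.

# Run inside the tensorrtserver_clients container against a hypothetical
# model; -b is the batch size, -t the request concurrency, and -p the
# measurement window in milliseconds.
perf_client -m resnet50_netdef -b 1 -t 1 -p 5000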
On the server side, the TensorRT Inference Server is logging these messages (the unknown "id" field suggests the client is sending a newer request format than the 19.02 server understands):
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format nvidia.inferenceserver.InferRequestHeader: 1:79: Message type "nvidia.inferenceserver.InferRequestHeader" has no field named "id".
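One way to sanity-check which server version is actually running (a sketch, assuming the 19.02 HTTP service on its default port 8000) is to query the status API, which reports the server version along with per-model status:

# Assumes the server's HTTP endpoint is reachable on localhost:8000.
curl localhost:8000/api/status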