
Performance Example Application: [ 0] INTERNAL - No valid requests recorded within time interval. Please use a larger time window. #136

Closed
vilmara opened this issue Mar 7, 2019 · 6 comments

Comments


vilmara commented Mar 7, 2019

Hi all, I am running the perf_client example application and getting the errors below:

Server: 4xT4 GPUs
Docker image used for the server: nvcr.io/nvidia/tensorrtserver:19.02-py3
Command used to run the client: nvidia-docker run -it --rm --net=host --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 tensorrtserver_clients

Example 1:

root@R7425-T4:/workspace# /opt/tensorrtserver/bin/perf_client -m resnet50_netdef -p3000 -t4 -v
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 3000 msec

Request concurrency: 4
[ 0] INTERNAL - No valid requests recorded within time interval. Please use a larger time window.
Thread [0] had error: [inference:0 0] INVALID_ARG - unable to parse request for model 'resnet50_netdef'
Thread [1] had error: [inference:0 0] INVALID_ARG - unable to parse request for model 'resnet50_netdef'
Thread [2] had error: [inference:0 0] INVALID_ARG - unable to parse request for model 'resnet50_netdef'
Thread [3] had error: [inference:0 0] INVALID_ARG - unable to parse request for model 'resnet50_netdef'

Example 2:

root@R7425-T4:/workspace# /opt/tensorrtserver/bin/perf_client -m resnet50_netdef -d -c8 -l200 -p5000 -b8
*** Measurement Settings ***
  Batch size: 8
  Measurement window: 5000 msec
  Latency limit: 200 msec
  Concurrency limit: 8 concurrent requests

Request concurrency: 1
[ 0] INTERNAL - No valid requests recorded within time interval. Please use a larger time window.
Thread [0] had error: [inference:0 0] INVALID_ARG - unable to parse request for model 'resnet50_netdef'

Example 3:

root@R7425-T4:/workspace# /opt/tensorrtserver/bin/perf_client -m resnet50_netdef -p3000 -d -l50 -c 3
*** Measurement Settings ***
  Batch size: 1
  Measurement window: 3000 msec
  Latency limit: 50 msec
  Concurrency limit: 3 concurrent requests

Request concurrency: 1
[ 0] INTERNAL - No valid requests recorded within time interval. Please use a larger time window.
Thread [0] had error: [inference:0 0] INVALID_ARG - unable to parse request for model 'resnet50_netdef'

Meanwhile, the TensorRT Inference Server is logging this message:
[libprotobuf ERROR external/protobuf_archive/src/google/protobuf/text_format.cc:307] Error parsing text-format nvidia.inferenceserver.InferRequestHeader: 1:79: Message type "nvidia.inferenceserver.InferRequestHeader" has no field named "id".

deadeyegoodwin (Contributor) commented Mar 7, 2019

What version of the inference server are you using? It appears that you are likely using a version of perf_client that is not compatible with the server. Typically, because we are still in beta, you must use a perf_client with the same version as the server.
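
(If you need to confirm which version a running server is, its status endpoint reports it. A minimal sketch, assuming the server's HTTP service is on the default port 8000:)

# Query the server status; the response includes the server version string.
curl localhost:8000/api/status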

vilmara (Author) commented Mar 7, 2019

@deadeyegoodwin I am using the tag nvcr.io/nvidia/tensorrtserver:19.02-py3. How can I set up a perf_client version that matches the server version?

deadeyegoodwin (Contributor) commented Mar 7, 2019

Where did you get the perf_client executable?

vilmara (Author) commented Mar 7, 2019

I downloaded the TRTIS master branch to my server and built the tensorrtserver_clients Docker image from there:
docker build -t tensorrtserver_clients --target trtserver_build --build-arg "BUILD_CLIENTS_ONLY=1" .

deadeyegoodwin (Contributor) commented Mar 7, 2019

The master branch is far ahead of the 19.02 branch. You should use the r19.02 branch and build the clients on that branch (since that matches your server).

The documentation was recently made more explicit about using the correct branch; see the master branch docs: https://docs.nvidia.com/deeplearning/sdk/tensorrt-inference-server-master-branch-guide/docs/client.html#building-the-client-libraries-and-examples
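
For reference, a minimal sketch of that workflow (the clone URL and paths are illustrative; the build command is the one from the earlier comment):

# Clone the repo, switch to the release branch that matches the
# 19.02 server image, and rebuild the client image from that branch.
git clone https://github.com/NVIDIA/tensorrt-inference-server.git
cd tensorrt-inference-server
git checkout r19.02
docker build -t tensorrtserver_clients --target trtserver_build --build-arg "BUILD_CLIENTS_ONLY=1" .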

vilmara (Author) commented Mar 8, 2019

Issue solved! @deadeyegoodwin, thanks for updating the docs.
