OnnxRuntime_test_all fails when building with TensorRT #5232
Comments
We've upgraded TensorRT to 7.1.3.4. Please download the package and use it with CUDA 11 and cuDNN 8 on Windows.
@stevenlix Thanks for the suggestion. I was using TensorRT 7.1.3.4 and got the error I posted in the previous message. So do you mean that upgrading from CUDA 10.2 to CUDA 11 and from cuDNN 7.6.5 to cuDNN 8 will solve this error?
@stevenlix Thanks for your reply, Steven. Mohit (@maynai-nv) at Nvidia is trying to build ONNXRT on Windows. His question is whether ORT-TRT 7.1.3.4 with CUDA 10.2 and cuDNN 7.6.5 has been verified as working on Windows or not.
On Windows, I thought TensorRT 7.1.3.4 was only available with CUDA 11.0 (at least that is what the Nvidia download page indicates).
@jywu-msft and @stevenlix, thanks for the suggestion. I updated to CUDA 11.0, cuDNN 8.0, and TensorRT 7.1.3.4 and still got the same error.
Can you attach the full output log?
Please note that I submitted a bug report weeks ago that TensorRT was broken on Windows with the latest NVIDIA binaries (#4841). I have been waiting for a follow-up. Can someone confirm that TensorRT 7.1.3.4/CUDA 11.0/cuDNN 8 is now working correctly on Windows?
@jywu-msft The following is the full output log.
" ----- MEMORY LEAKS: 216 bytes of memory leaked in 6 allocations" That's the reason. Please try to run 'onnxruntime_test_all' in visual studio, it will give you more information. |
@mayani-nv Thank you for the valuable information; we'll start fixing the leaks.
I have the exact same error message, but I'm not building with TensorRT:
Please try to run 'onnxruntime_test_all' in Visual Studio; it will tell us which allocation leaked.
@Linux13524 Could you please give us the onnxruntime commit id you used in the build?
Hi @Linux13524, thank you. I can reproduce the error, but I think the latest master is fine. As a workaround, you may go to https://github.com/microsoft/onnxruntime/blob/master/tools/ci_build/build.py#L965 and set "-Donnxruntime_ENABLE_MEMLEAK_CHECKER=OFF" for all the cases.
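For anyone landing on this thread later, here is a minimal sketch of what that workaround amounts to when editing tools/ci_build/build.py locally. The helper name and structure below are illustrative only; the one thing taken from the comment above is the CMake define itself.

```python
# Minimal sketch of the workaround above, assuming you edit build.py by hand.
# The function name is hypothetical; only the define
# -Donnxruntime_ENABLE_MEMLEAK_CHECKER=OFF comes from the thread.
def memleak_checker_define(enable: bool = False) -> str:
    """Build the CMake definition that controls the CRT-based leak checker."""
    return "-Donnxruntime_ENABLE_MEMLEAK_CHECKER=" + ("ON" if enable else "OFF")

if __name__ == "__main__":
    # Append this string to the CMake argument list that build.py assembles.
    print(memleak_checker_define())  # -Donnxruntime_ENABLE_MEMLEAK_CHECKER=OFF
```

Depending on your build.py revision, passing the define via --cmake_extra_defines onnxruntime_ENABLE_MEMLEAK_CHECKER=OFF may also work without editing the script, though whether it overrides the hard-coded setting depends on how that revision orders its CMake arguments.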
@snnn Thanks a lot for the info! I will try and see whether the latest master works; otherwise I will use your workaround.
Latest master works. Thanks again @snnn! |
I followed the instructions described here to build with TensorRT and got the following error (a sketch of the build invocation is included after the system information below).
System information
OS Platform and Distribution (e.g., Linux Ubuntu 18.04): WS 2016
Visual Studio : 2017 Community
ONNX Runtime installed from (source or binary): source
ONNX Runtime version: 1.7
Python version: 3.7
CUDA version: 10.1
cuDNN version: 7.6.5
GPU card: P40
Thanks in advance
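For reference, here is a hedged sketch of the kind of TensorRT-enabled Windows build invocation implied above. The flag names follow the documented tools/ci_build/build.py options; every local path is a placeholder rather than a value taken from this issue.

```python
# Hedged sketch of a TensorRT-enabled Windows build of ONNX Runtime.
# Flags mirror the documented tools/ci_build/build.py options; all paths
# below are placeholders and must be adapted to the local installation.
import subprocess
import sys

cmd = [
    sys.executable, r"tools\ci_build\build.py",
    "--build_dir", r"build\Windows",
    "--config", "RelWithDebInfo",
    "--build_shared_lib",
    "--parallel",
    "--use_cuda",
    "--cuda_home", r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.0",  # placeholder
    "--cudnn_home", r"C:\local\cudnn-8.0",                                       # placeholder
    "--use_tensorrt",
    "--tensorrt_home", r"C:\local\TensorRT-7.1.3.4",                             # placeholder
]

# onnxruntime_test_all runs as part of the build unless --skip_tests is added.
subprocess.run(cmd, check=True)
```

If the leak-checker failure reported in this issue reappears with such a configuration, the memleak-checker workaround sketched earlier in the thread applies here as well.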