I exported an ONNX model with the built-in YOLOv8 export interface, then converted it to a TensorRT engine with trtexec, and got the following warning. Can the exported model still be used normally?
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
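For reference, here is a minimal sketch of the export path described above, assuming the standard ultralytics Python API; the model and file names are illustrative. The INT64-to-INT32 cast warning is expected with this flow and is generally harmless, since ONNX emits some weights and shape constants as INT64 while TensorRT stores them as INT32, and typical YOLOv8 values fit comfortably in INT32 range.

```python
# Sketch: export YOLOv8 to ONNX, then build a TensorRT engine.
# Assumes the stock ultralytics package; filenames are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")   # any YOLOv8 checkpoint
model.export(format="onnx")  # writes yolov8n.onnx next to the weights

# Then build the engine with trtexec (paths are placeholders):
#   trtexec --onnx=yolov8n.onnx --saveEngine=yolov8n.engine
#
# The warning from onnx2trt_utils.cpp appears during this step and
# only means the INT64 values are being narrowed to INT32; the
# resulting engine should still work normally.
```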
@YuHe0108 can you please help me with how you set up your Jetson Nano to work with YOLOv8 and TensorRT?
I have already tried a venv on Python 3.8 and even a Docker image. Both options still lead to TensorRT not being found or detected, and CUDA is not working either.
I can only use TensorRT, and even export an ONNX file to a trt/engine file, in a Python 3.6 environment.
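Not knowing the exact Jetson setup, a quick way to tell whether a given Python environment actually sees TensorRT is a minimal import check; a sketch assuming the stock `tensorrt` bindings that ship with JetPack. On Jetson, a common cause of "TensorRT not found" in a venv or container is that the JetPack-provided wheels are built for the system Python (3.6 on older JetPack releases), so a 3.8 venv cannot import them.

```python
# Sanity check: does this interpreter see TensorRT and its CUDA libs?
# Uses only the standard tensorrt Python module.
import tensorrt as trt

print("TensorRT version:", trt.__version__)

# Creating a Builder exercises the underlying CUDA/TensorRT libraries;
# it will fail if they are missing or mismatched, which is a common
# symptom when the venv Python does not match the JetPack wheels.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
print("Builder created; fast FP16 support:", builder.platform_has_fast_fp16)
```

Running this in both the Python 3.6 environment (where things work) and the 3.8 venv should show whether the problem is the import itself or something further along the export path.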