
yolov8 onnx->tensorRT #285

Open · YuHe0108 opened this issue Feb 14, 2025 · 2 comments

@YuHe0108
I exported an ONNX model with YOLOv8's built-in export interface and then converted it to a TensorRT engine with trtexec, which produced the warning below. Can the exported model still be used normally?
onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
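
For reference, the export path described above looks roughly like this (a minimal sketch; the weight and file names are illustrative, not taken from this thread):

```python
# Minimal sketch of the export path described above; weight/file names
# are illustrative.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")       # load pretrained YOLOv8 weights
model.export(format="onnx")      # YOLOv8's built-in exporter writes yolov8n.onnx

# Then convert on the command line (trtexec ships with TensorRT):
#   trtexec --onnx=yolov8n.onnx --saveEngine=yolov8n.engine
```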

@triple-Mu (Owner)

> I exported an ONNX model with YOLOv8's built-in export interface and then converted it to a TensorRT engine with trtexec, which produced the warning below. Can the exported model still be used normally? onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.

It has no effect; the model can be used normally.
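
The warning only means the ONNX graph contains INT64 tensors (typically shape/index constants from the PyTorch export) that TensorRT casts down to INT32. The cast is lossless as long as no value exceeds the INT32 range, which you can check yourself; a minimal sketch, assuming the model was saved as yolov8n.onnx:

```python
# Minimal sketch: verify that no INT64 initializer in the exported ONNX
# model holds values outside the INT32 range, so TensorRT's cast is lossless.
# Assumes the model path "yolov8n.onnx" (illustrative).
import numpy as np
import onnx
from onnx import numpy_helper

model = onnx.load("yolov8n.onnx")
i32_min, i32_max = np.iinfo(np.int32).min, np.iinfo(np.int32).max

for init in model.graph.initializer:
    if init.data_type == onnx.TensorProto.INT64:
        values = numpy_helper.to_array(init)
        if values.size and (values.min() < i32_min or values.max() > i32_max):
            print(f"{init.name}: values exceed INT32 range; cast may overflow")
```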

@ejb27 commented Feb 24, 2025

@YuHe0108 can you please help me with how you set up your Jetson Nano to work with YOLOv8 and TensorRT?

I already tried using a venv on Python 3.8 and even a Docker image. Both options still lead to TensorRT not being found or detected, and CUDA is also not working.

I can only use TensorRT, and export an ONNX file to a TRT engine, from the Python 3.6 environment.

Any help is greatly appreciated.

Device: Jetson Nano 4GB Dev Kit
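
For what it's worth, on JetPack 4.x (the stock image for the Jetson Nano) the TensorRT Python bindings ship prebuilt only for the system Python 3.6, which would explain why they are visible there but not from a 3.8 venv or container. A minimal sanity check to run inside whichever environment you are testing:

```python
# Minimal sanity check: confirm which interpreter is running and whether
# the JetPack-provided TensorRT bindings are importable from it.
import sys

print("Python:", sys.version.split()[0])

try:
    import tensorrt as trt
    print("TensorRT:", trt.__version__)
except ImportError as exc:
    print("TensorRT not importable:", exc)
```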
