convert yolov5 to openvino #891
Have you tried onnx-simplifier to simplify your ONNX model? It is really helpful for eliminating unsupported ops and simplifying the whole ONNX graph structure. If you can simplify your ONNX model, please let me know.
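A minimal sketch of running onnx-simplifier from the command line; the input/output filenames are assumptions:

```shell
# Install onnx-simplifier and simplify an exported YOLOv5 ONNX model.
# "yolov5s.onnx" is a placeholder for your exported model file.
pip install onnx-simplifier
python -m onnxsim yolov5s.onnx yolov5s-sim.onnx
```

The simplified model folds constants and removes redundant ops, which often makes conversion to TensorRT or OpenVINO succeed where the raw export fails.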
It is at Line 26 in 5e0b90d.
Change it before training, then use the pre-trained model or your self-trained model directly.
@0806gcx As I mentioned above, the CPU-mode traced ONNX model is somewhat hard to convert to other frameworks such as TensorRT and OpenVINO. It cannot even pass a check in onnxruntime. @glenn-jocher Simply verify it with onnx-simplifier.
@jinfagang |
@0806gcx It says the model has an error in a Concat node: the dimensions do not match.
@0806gcx I think @glenn-jocher added something that broke this model. With v1.0 and v2.0 I could successfully export ONNX and convert to TensorRT very quickly, but now it breaks. Since this repo's author doesn't work much on the CUDA side (he always exports in CPU mode), he hasn't noticed this bug. But the v3.0 exported ONNX no longer works: it has a wrong Concat connection, and I cannot even export it successfully in CUDA mode.
@linhaoqi027 @0806gcx @jinfagang unfortunately I don't have any openvino experience so I can't help here, but I think it would be worth raising this issue directly on the openvino repository since the errors are generated there. The scope of the YOLOv5 repository as it stands now is limited to basic export to ONNX, Torchscript and CoreML. These 3 exports are currently working correctly when run in verified environments that meet all requirements.txt dependencies using the official models. 3rd party implementations that import exported models are not under our control and are beyond the scope of our support. |
@glenn-jocher I solved the problem with CUDA export, and it now also deploys correctly to TensorRT.
I met a similar bug too, using OpenVINO 2020R4.
Are your versions torch==1.5.1 and torchvision==0.6.1?
Yes.
I didn't hit this bug. Maybe it is caused by the yolov5 version; I used yolov5 v3.
@jinfagang what exactly did you do to convert v3.0 to tensorrt. Can you please throw some light on that? |
@makaveli10 I am simply convert to onnx, not inference via tensorrt, however seems here is an out-of-box example: http://manaai.cn/aisolution_detail.html?id=5 |
@jinfagang so you didnt convert v3.0 to tensorrt. but able to convert to onnx? |
@linhaoqi027 I successfully converted the model. I trained it on 2 classes, so I expected the network outputs to reflect that. What I mean is that I expect the last value to be 0 or 1 (for two classes) and the 4th value to be between 0 and 1 for confidence. How can I solve this issue? Please explain in detail.
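One likely explanation is that the exported model emits raw logits rather than probabilities, since the final Detect post-processing is excluded from the export. A hedged sketch of decoding such raw rows, assuming YOLOv5's [x, y, w, h, obj, class0, class1, ...] layout (the layout and function names here are illustrative, not the repo's API):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_raw(raw):
    """raw: (N, 5 + num_classes) array of [x, y, w, h, obj, cls0, cls1, ...] logits.

    Returns the predicted class index and a confidence in (0, 1) per row.
    """
    out = raw.astype(float).copy()
    out[:, 4:] = sigmoid(out[:, 4:])          # objectness + class logits -> (0, 1)
    cls_id = out[:, 5:].argmax(axis=1)        # e.g. 0 or 1 for a 2-class model
    conf = out[:, 4] * out[:, 5:].max(axis=1)  # objectness * best class score
    return cls_id, conf
```

After this decoding, the class index is an integer (0 or 1 for two classes) and the confidence lies between 0 and 1, matching the expected values.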
|
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
The above steps worked for me when I reverted to commit c889267.
@JoshChristie @linhaoqi027 |
Yes, I see a noticeable decrease in inference time after converting the model to openvino for use on a CPU. |
@JoshChristie |
@adityap27 CPU: i7-8750H @ 2.20GHz. Model: ~1.2M parameters.
@JoshChristie I am getting the same latency for both PyTorch and OpenVINO.
@adityap27 Updated my previous comment. |
@JoshChristie
|
|
NMS is not included in the OpenVINO model, so I think the main time cost is in NMS. I convert the output (numpy) to a PyTorch tensor and then run NMS on it, which I think causes most of the overhead.
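That numpy-to-torch round-trip can be avoided by running NMS directly in NumPy. A minimal sketch, assuming boxes in (x1, y1, x2, y2) format and an IoU threshold of 0.45 (both assumptions, not the repo's defaults):

```python
import numpy as np

def nms(boxes, scores, iou_thres=0.45):
    """Greedy NMS. boxes: (N, 4) as x1,y1,x2,y2; scores: (N,). Returns kept indices."""
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        # Intersection of the top box with the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        # Drop boxes that overlap the kept box too much
        order = order[1:][iou <= iou_thres]
    return keep
```

This is O(N²) in the worst case but avoids any framework dependency in the post-processing path.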
@linhaoqi027
|
@JoshChristie can you help with this info?
|
I'm having the same size issue when trying to convert from ONNX to IE.
You can refer to https://github.com/linhaoqi027/yolov5_openvino_sdk.
Hi, thanks for the information! I use yolov5s v3, image size 640x640, one thread; OpenVINO (2020R4) time is 1.28 s while PyTorch single-thread time is 1.2 s. It's too slow.
With one thread your results make sense to me. My image size was 288x512. |
Please refer to the document at https://cdrdv2.intel.com/v1/dl/getContent/633562. Thanks.
Thanks a lot, your answer worked for me.
@linhaoqi027 @violet17 @feolcn @JoshChristie @mengqingmeng @Belinda-great good news 😃! Your original issue may now be fixed ✅ in PR #6057. This PR adds native YOLOv5 OpenVINO export: python export.py --weights yolov5s.pt --include openvino # export to OpenVINO To receive this update:
Thank you for spotting this issue and informing us of the problem. Please let us know if this update resolves the issue for you, and feel free to inform us of any other issues you discover or feature requests that come to mind. Happy trainings with YOLOv5 🚀! |
Before You Start
Train your YOLOv5 model on your own dataset following Train Custom Data.
yolov5/models/common.py
Line 26 in 5e0b90d
Export the model to ONNX following #251.
Before running export.py, change:
yolov5/models/yolo.py
Lines 49 to 53 in 5e0b90d
yolov5/models/export.py
Line 31 in 5e0b90d
yolov5/models/export.py
Lines 51 to 52 in 5e0b90d
to opset_version=10, because only opset 10 supports the Resize op.
Then you can run export.py to export the ONNX model.
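A sketch of the export invocation, assuming a v3.0-era checkout; the script path and flag names may differ between versions:

```shell
# Run from the repo root so the models package is importable.
export PYTHONPATH="$PWD"
# Export yolov5s.pt to ONNX (also produces TorchScript/CoreML in that era).
python models/export.py --weights yolov5s.pt --img-size 640 --batch-size 1
```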
Install OpenVINO 2020R4.
You can install it following https://bbs.cvmart.net/topics/3117
Convert the ONNX model to OpenVINO through
Then you will get the OpenVINO model (.bin and .xml).
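The conversion step above uses OpenVINO's Model Optimizer; a sketch for 2020R4 on Linux, where the install path, input filename, and output directory are assumptions:

```shell
# Load the OpenVINO environment (default 2020R4 install location assumed).
source /opt/intel/openvino/bin/setupvars.sh

# Convert the simplified ONNX model to IR (.xml + .bin).
python /opt/intel/openvino/deployment_tools/model_optimizer/mo.py \
    --input_model yolov5s.onnx \
    --data_type FP16 \
    --output_dir ./openvino_model
```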
WARNING: NMS is not included in the OpenVINO model.
As for how to use OpenVINO for inference, please see my profile or https://github.com/linhaoqi027/yolov5_openvino_sdk