"Unsupport type" error in running an onnx model #75

Open
Heresyrac opened this issue Apr 10, 2022 · 11 comments
Open

"Unsupport type" error in running an onnx model #75

Heresyrac opened this issue Apr 10, 2022 · 11 comments

Comments

@Heresyrac

Describe the bug
I am trying to run ONNX models on Gemmini using Spike. I successfully ran inference with googlenet_quantized.onnx and mobilenet_quantized.onnx, but resnet50_opt_quant.onnx fails with "Unsupport type" after printing "Called into systolic add".
In CMakeLists.txt, I set fp32 off and int8 on.
What is the problem?
Urgency
none
System information

  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04

  • ONNX Runtime installed from (source or binary): source

  • ONNX Runtime version: branch 2021-12-23

  • Python version: 3.6.9

To Reproduce

  • Steps to reproduce the behavior:
    I changed the CMakeLists.txt to fp32 off and int8 on, then rebuilt ORT and ort_test:
    spike --extension=gemmini pk ort_test -m resnet50_opt_quant.onnx -i images/cat.jpg -p caffe2 -x 1 -O 0

  • Model: the released resnet50_opt_quant.onnx

@hngenc
Member

hngenc commented Apr 10, 2022

This might be because you're using the output-stationary (OS) dataflow.

The residual additions only work in weight-stationary (WS) mode.

We'll fix that later so it works with both.

@Heresyrac
Author

@hngenc So can this be considered a bug? I just ran ort_test in onnxruntime-riscv/systolic_runner/imagenet_runner with this model. I can't find where to set OS or WS mode. Should it be selected automatically when the runner calls "session.Run(Ort::RunOptions{nullptr}, input_node_names.data(), &input_tensor, 1, output_node_names.data(), 1);" in runner.cpp?

@hngenc
Member

hngenc commented Apr 15, 2022

Can you try this command instead?

spike --extension=gemmini pk ort_test -m resnet50_opt_quant.onnx -i images/dog.jpg  -p caffe2 -x 2 -O 99

The -x option chooses the dataflow. The -O option chooses what level of optimizations to enable.
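Concretely, the mapping implied by this thread (the meaning of each -x value is inferred from these comments, so confirm it against ort_test's usage message):

    # -x 1: output-stationary (OS) dataflow; residual adds currently fail here
    spike --extension=gemmini pk ort_test -m resnet50_opt_quant.onnx -i images/dog.jpg -p caffe2 -x 1 -O 0

    # -x 2: weight-stationary (WS) dataflow; -O 99 enables all optimizations
    spike --extension=gemmini pk ort_test -m resnet50_opt_quant.onnx -i images/dog.jpg -p caffe2 -x 2 -O 99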

@Heresyrac
Author

Heresyrac commented Apr 16, 2022

@hngenc OK, I see. The inference succeeds now.
But the result doesn't make sense: it should be a dog. What's the problem? I have tried the three different preprocessing styles, and the results are still the same.

Called into systolic conv
Called into systolic conv
Called into systolic conv
Called into systolic add
Element count 1000. Top 5 classes:
0.012803 rule, ruler
0.020608 mailbag, postbag
0.140059 tray
0.162265 dough
0.552930 cup
Done! Inference took 238882091 cycles

Also, if I port some other model to Gemmini, is it possible that an ONNX model works in neither mode?
Should it theoretically be possible to use a different dataflow at each layer? Does the fix you mentioned mean that the mode will be selected automatically in the next version?

@hngenc
Member

hngenc commented May 1, 2022

When I run this command, I get these predictions instead:

0.031456 giant schnauzer
0.075702 curly-coated retriever
0.087432 Great Dane
0.271946 Labrador retriever
0.361813 Rottweiler

Maybe there's an issue with your ONNX model? Try using this model:

wget https://github.com/ucb-bar/onnxruntime-riscv/releases/download/v0.01/resnet50_opt_quant.onnx

After downloading that model, I ran this command:

spike --extension=gemmini pk ort_test -m resnet50_opt_quant.onnx -i images/dog.jpg  -p caffe2 -x 2 -O 99

@ybai62868

@hngenc Hi, a quick question!
I am also reproducing your resnet50_opt_quant.onnx result, and I want to know how to produce this model with onnxruntime-riscv. I downloaded the pre-trained FP32 model from Hugging Face, then used onnxruntime-riscv/systolic_runner/quantization/optimize.py and calibrate.py to get the quantized model.
Finally, I used Spike to run it, but it fails no matter which ONNX opset version I try (9, 10, 11, 12).
The reported error is:
terminate called after throwing an instance of 'Ort::Exception' what(): Could not find an implementation for the node Conv_0_quant:QLinearConv(10)
After opening my model in Netron and comparing it with yours (resnet50_opt_quant.onnx), I found that the first conv layer in my model is named Conv_0_quant:QLinearConv(10).
How can I solve this problem?
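For reference, my pipeline looked roughly like this (the flag names below are approximate, from memory; check each script's --help for the exact arguments):

    # Hypothetical invocation of the repo's quantization scripts; paths and flags are illustrative
    cd onnxruntime-riscv/systolic_runner/quantization
    python3 optimize.py --input resnet50_fp32.onnx --output resnet50_opt.onnx
    python3 calibrate.py --model_path resnet50_opt.onnx --output_model_path resnet50_opt_quant.onnx --dataset_path calibration_images/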

@hngenc
Member

hngenc commented Sep 13, 2022

@ybai62868 Can you check which branch of Onnx-Runtime you're on? We saw a similar error on previous versions, but I think we fixed it on the 2021-12-23 branch.
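For example, from the directory containing your checkout (standard git commands; 2021-12-23 is the branch name from this thread):

    # Show the branch you are currently on
    git -C onnxruntime-riscv branch --show-current
    # Switch to the branch with the fix, then rebuild ORT and ort_test
    git -C onnxruntime-riscv checkout 2021-12-23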

@Teut0nic

> @ybai62868 Can you check which branch of Onnx-Runtime you're on? We saw a similar error on previous versions, but I think we fixed it on the 2021-12-23 branch.

I'm getting the same error on the 2021-12-23 branch.

@hngenc
Member

hngenc commented Nov 23, 2022

@Teut0nic Are you getting that error when using Onnx-Runtime to convert an FP32 model to Int8?

If so, can you link the FP32 model that's giving you the error? It's probably just not on the right Onnx opset version. I can try to fix that.
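One quick way to check is with the onnx Python package (generic onnx API calls, nothing specific to this repo; model_fp32.onnx is a placeholder name):

    # Print the opset imports of the FP32 model
    python3 -c "import onnx; print(onnx.load('model_fp32.onnx').opset_import)"
    # Retarget the opset with onnx's version converter if needed (12 is just an example target)
    python3 -c "import onnx; from onnx import version_converter; m = onnx.load('model_fp32.onnx'); onnx.save(version_converter.convert_version(m, 12), 'model_fp32_op12.onnx')"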

@Teut0nic

Hey Hasan,

I've done some tweaking on the version of GEMMINI that was the latest as of Sep 19, 2022. I have a list of changes that I came up with. I would be more than happy to email them to you.

@hngenc
Member

hngenc commented Jan 21, 2023

Great; looking forward to taking a look at your changes!
