"Unsupport type" error in running an onnx model #75
Comments
This might be because you're using the output-stationary mode (OS). The residual additions only work when you're in weight-stationary mode (WS). We'll fix that later to work with both.
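(For reference: the residual additions mentioned above are the elementwise skip-connection adds in ResNet-style blocks, which is presumably why the resnet50 model in this issue reaches 'Called into systolic add'. A minimal sketch of the operation itself, not the actual Gemmini/systolic kernel:)

#include <cstddef>
#include <cstdint>

// y = F(x) + x : add a block's conv output to its skip input, element by element.
// int8 is used here only to mirror the quantized model; a real kernel would
// requantize/saturate rather than add raw int8 values.
void residual_add(const int8_t* conv_out, const int8_t* skip, int8_t* y, std::size_t n) {
  for (std::size_t i = 0; i < n; ++i) {
    y[i] = static_cast<int8_t>(conv_out[i] + skip[i]);
  }
}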
@hngenc So can this be considered a bug? I just ran the ort_test in onnxruntime-riscv/systolic_runner/imagenet_runner with this model. I can't find where to set the OS mode or WS mode. Should it be selected automatically when running "session.Run(Ort::RunOptions{nullptr}, input_node_names.data(), &input_tensor, 1, output_node_names.data(), 1);" in runner.cpp?
Can you try this command instead?
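(For context on the question above: the dataflow is normally fixed when the Systolic execution provider is registered on the session options, before the session is created; session.Run() itself does not choose OS vs. WS. Below is a minimal sketch of that setup. The provider-append function, its signature, and the mode values are assumptions for illustration, not the confirmed onnxruntime-riscv API; runner.cpp has the real registration call and the flag that feeds it.)

#include <onnxruntime_cxx_api.h>

// Assumed declaration: the real one lives in the onnxruntime-riscv systolic
// provider headers; the name, signature, and mode encoding are guesses.
extern "C" OrtStatus* OrtSessionOptionsAppendExecutionProvider_Systolic(
    OrtSessionOptions* options, int use_arena, char accelerator_mode);

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "ort_test");
  Ort::SessionOptions session_options;

  // Hypothetical mode values: 0 = CPU fallback, 1 = output-stationary (OS),
  // 2 = weight-stationary (WS). In the runner this presumably comes from a CLI flag.
  char accelerator_mode = 2;
  OrtSessionOptionsAppendExecutionProvider_Systolic(session_options, /*use_arena=*/1,
                                                    accelerator_mode);

  Ort::Session session(env, "resnet50_opt_quant.onnx", session_options);
  // ... prepare input/output names and tensors, then:
  // session.Run(Ort::RunOptions{nullptr}, input_node_names.data(), &input_tensor, 1,
  //             output_node_names.data(), 1);
  return 0;
}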
@hngenc OK, I see. The inference succeeds now, with "Called into systolic conv" printed. I'd also like to know: if I try to port some other model to Gemmini, is it possible that the ONNX model won't work in both modes?
When I run this command, I get these predictions instead:
Maybe there's an issue with your ONNX model? Try using this model:
After downloading that model, I ran this command:
@hngenc Hi, a quick question!
@ybai62868 Can you check which branch of Onnx-Runtime you're on? We saw a similar error to that on previous versions, but I think we fixed it on the
I'm getting the same error on the 2021-12-23 branch.
@Teut0nic Are you getting that error when using Onnx-Runtime to convert an FP32 model to Int8? If so, can you link the FP32 model that's giving you the error? It's probably just not on the right ONNX opset version. I can try to fix that.
Hey Hasan, I've done some tweaking on the version of GEMMINI that was the latest as of Sep 19, 2022. I have a list of changes that I came up with. I would be more than happy to email them to you.
Great; looking forward to taking a look at your changes!
Describe the bug
I am trying to run ONNX models on Gemmini with Spike. I have successfully run inference on googlenet_quantized.onnx and mobilenet_quantized.onnx, but it fails on resnet50_opt_quant.onnx with an 'Unsupport type' error after 'Called into systolic add'.
In CMakeLists.txt, I set fp32 off and int8 on.
What is the problem?
Urgency
none
System information
OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 18.04
ONNX Runtime installed from (source or binary): source
ONNX Runtime version: branch 2021-12-23
Python version: 3.6.9
Visual Studio version (if applicable):
GCC/Compiler version (if compiling from source):
CUDA/cuDNN version:
GPU model and memory:
To Reproduce
Describe steps/code to reproduce the behavior.
I changed the CMakeLists to fp32 off and int8 on, rebuilt ORT and ort_test, and ran:
spike --extension=gemmini pk ort_test -m resnet50_opt_quant.onnx -i images/cat.jpg -p caffe2 -x 1 -O 0
Attach the ONNX model to the issue (where applicable) to expedite investigation.
The released model: resnet50_opt_quant.onnx
Expected behavior
A clear and concise description of what you expected to happen.
Screenshots
Additional context