MCT tutorials readme - fix broken links
Idan-BenAmi committed Mar 21, 2024
1 parent 17efd59 commit 2f88b9d
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion FAQ.md
@@ -36,7 +36,7 @@ quantized_model = mct.keras_load_quantized_model('my_model.keras')

#### PyTorch

-PyTorch models can be exported as onnx models. An example of loading a saved onnx model can be found [here](https://github.com/sony/model_optimization/blob/main/docs/api/experimental_api_docs/modules/exporter.html#use-exported-model-for-inference).
+PyTorch models can be exported as onnx models. An example of loading a saved onnx model can be found [here](https://sony.github.io/model_optimization/api/api_docs/modules/exporter.html#use-exported-model-for-inference).

*Note:* Running inference on an ONNX model with the `onnxruntime` package incurs high latency.
Inference on the target platform (e.g. the IMX500) is not affected by this latency.
