Remove unneeded *.py tutorials and edit the tutorials readme #1002
Conversation
tutorials/README.md
Outdated
## Getting started
This "hello world" notebook shows how to quickly quantize a pre-trained model using MCT post training quantization technique both for Keras models and Pytorch models.
- [Keras MobileNetV2 post training quantization](notebooks/keras/ptq/example_keras_imagenet.ipynb)
- [Pytorch MobileNetV2 post training quantization](notebooks/pytorch/ptq/example_pytorch_mobilenet_v2.py)
why link to .py?
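For readers following the thread: the notebooks under review demonstrate MCT's post-training quantization (PTQ) flow. As a rough, library-independent illustration of what PTQ does to a weight tensor, here is a minimal symmetric per-tensor 8-bit quantization sketch in plain NumPy. The helper names are illustrative only and are not part of MCT's actual API.

```python
import numpy as np

def quantize_symmetric_int8(weights: np.ndarray):
    """Symmetric per-tensor 8-bit quantization: map floats into [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0  # single scale for the whole tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Example: quantize a small random weight tensor and bound the error.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, scale = quantize_symmetric_int8(w)
w_hat = dequantize(q, scale)
# Rounding error is at most half a quantization step (scale / 2).
print(q.dtype, float(np.max(np.abs(w - w_hat))) <= scale / 2 + 1e-6)
```

The real MCT notebooks additionally use a representative dataset to calibrate activation ranges; this sketch covers only the weight-quantization idea.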
tutorials/README.md
Outdated
- [Pytorch MobileNetV2 post training quantization](notebooks/pytorch/ptq/example_pytorch_mobilenet_v2.py)

## MCT Features
In this section, we will cover more advanced topics related to quantization.
"This set of tutorials covers all the quantization tools provided by MCT. The notebooks in this section demonstrate how to configure and run simple and advanced post-training quantization methods. This includes..."
tutorials/README.md
Outdated
In this section, we will cover more advanced topics related to quantization.
This includes fine-tuning PTQ (Post-Training Quantization) configurations, exporting models,
and exploring advanced compression techniques.
These techniques are crucial for optimizing models further and achieving better performance in deployment scenarios.
change crucial to beneficial and rephrase (maybe consult ChatGPT to make it a little more polished)
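The section under discussion mentions fine-tuning PTQ configurations. One common refinement such configurations expose is per-channel (rather than per-tensor) weight quantization; the sketch below illustrates the idea in plain NumPy. Function and parameter names are hypothetical, not MCT's API.

```python
import numpy as np

def quantize_per_channel_int8(weights: np.ndarray, axis: int = 0):
    """Per-channel symmetric int8 quantization: one scale per output channel.

    Illustrative only; a refinement over per-tensor quantization that
    reduces error when channel magnitudes differ widely.
    """
    # Reduce max |w| over every axis except the channel axis.
    reduce_axes = tuple(i for i in range(weights.ndim) if i != axis)
    scales = np.max(np.abs(weights), axis=reduce_axes, keepdims=True) / 127.0
    scales = np.where(scales == 0, 1.0, scales)  # guard all-zero channels
    q = np.clip(np.round(weights / scales), -127, 127).astype(np.int8)
    return q, scales

# Example: an 8-output-channel conv kernel, one scale per channel.
rng = np.random.default_rng(1)
w = rng.normal(size=(8, 3, 3, 16)).astype(np.float32)
q, scales = quantize_per_channel_int8(w, axis=0)
w_hat = q.astype(np.float32) * scales  # broadcasted dequantization
print(q.dtype, scales.size)
```

Per-channel scales cost a little extra metadata per layer but typically recover noticeable accuracy versus a single per-tensor scale, which is why PTQ configuration objects usually make this a tunable choice.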
## Quantization for Sony-IMX500 deployment
This section provides a guide on quantizing pre-trained models to meet specific constraints for deployment on the
processing platform. Our focus will be on quantizing models for deployment on [Sony-IMX500](https://developer.sony.com/imx500/) processing platform.
We will cover various tasks and demonstrate the necessary steps to achieve efficient quantization for optimal
This is not a tutorial but an introduction; the future tense here is mistaken.
Here we provide examples on quantizing pre-trained models for deployment on Sony-IMX500 processing platform.
We will cover various tasks and demonstrate the necessary steps to achieve efficient quantization for optimal
deployment performance.
highlight that the exported models from these tutorials are ready to be deployed on IMX500! (plug-and-play)
Pull Request Description:
Remove unneeded tutorials and edit the tutorials readme
Checklist before requesting a review: