Fix links in readme
Ofir Gordon authored and Ofir Gordon committed Apr 2, 2024
1 parent 0f1027e commit 0062201
Showing 2 changed files with 7 additions and 7 deletions.
8 changes: 4 additions & 4 deletions README.md
@@ -76,8 +76,8 @@ You can customize data generation configurations to suit your specific needs. [G

### Quantization
MCT supports different quantization methods:
- * Post-training quantization (PTQ): [Keras API](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/methods/keras_post_training_quantization_experimental.html#ug-keras-post-training-quantization-experimental), [PyTorch API](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/methods/pytorch_post_training_quantization_experimental.html#ug-pytorch-post-training-quantization-experimental)
- * Gradient-based post-training quantization (GPTQ): [Keras API](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/methods/keras_gradient_post_training_quantization_experimental.html#ug-keras-gradient-post-training-quantization-experimental), [PyTorch API](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/methods/pytorch_gradient_post_training_quantization_experimental.html#ug-pytorch-gradient-post-training-quantization-experimental)
+ * Post-training quantization (PTQ): [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_post_training_quantization.html), [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_post_training_quantization.html)
+ * Gradient-based post-training quantization (GPTQ): [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_gradient_post_training_quantization.html), [PyTorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_gradient_post_training_quantization.html)
* Quantization-aware training (QAT) [*](#experimental-features)
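
For orientation, a minimal PTQ sketch that exercises the Keras entry point linked above (the MobileNetV2 model and the random calibration data are placeholders, not taken from this commit; the call pattern follows the `keras_post_training_quantization` example visible later in this diff):

```python
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications import MobileNetV2  # placeholder float model

def representative_data_gen():
    # Yield batches shaped like the model's input; real calibration samples go here.
    yield [np.random.rand(1, 224, 224, 3).astype("float32")]

# Post-training quantization: returns the quantized model and quantization metadata.
quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    MobileNetV2(), representative_data_gen)
```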


@@ -120,8 +120,8 @@ taking into account the target platform's Single Instruction, Multiple Data (SIM
By pruning groups of channels (SIMD groups), our approach not only reduces model size
and complexity, but also ensures better utilization of channels in line with the SIMD architecture
for a target Resource Utilization of weights memory footprint.
- [Keras API](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/methods/keras_pruning_experimental.html)
- [Pytorch API](https://github.com/sony/model_optimization/blob/main/model_compression_toolkit/pruning/pytorch/pruning_facade.py#L43)
+ [Keras API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/keras_pruning_experimental.html)
+ [Pytorch API](https://sony.github.io/model_optimization/docs/api/api_docs/methods/pytorch_pruning_experimental.html)
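
As a rough sketch of how the structured-pruning entry point above might be invoked (the `mct.core.ResourceUtilization` import path, its `weights_memory` argument, and the keyword names are assumptions, not confirmed by this diff):

```python
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications import MobileNetV2  # placeholder dense model

def representative_data_gen():
    yield [np.random.rand(1, 224, 224, 3).astype("float32")]

dense_model = MobileNetV2()

# Assumed helper: cap the pruned weights memory at ~50% of the dense float32 footprint.
target_ru = mct.core.ResourceUtilization(
    weights_memory=dense_model.count_params() * 4 * 0.5)

pruned_model, pruning_info = mct.pruning.keras_pruning_experimental(
    model=dense_model,
    target_resource_utilization=target_ru,
    representative_data_gen=representative_data_gen)
```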

#### Experimental features

@@ -13,7 +13,7 @@ in some operator for its weights/activations, fusing patterns, etc.)
## Supported Target Platform Models

Currently, MCT contains three target-platform models
- (new models can be created and used by users as demonstrated [here](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/modules/target_platform.html#targetplatformmodel-code-example)):
+ (new models can be created and used by users as demonstrated [here](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html#targetplatformmodel-code-example)):
- [IMX500](https://developer.sony.com/develop/imx500/)
- [TFLite](https://www.tensorflow.org/lite/performance/quantization_spec)
- [QNNPACK](https://github.com/pytorch/QNNPACK)
@@ -27,7 +27,7 @@ One may view the full default target-platform model and its parameters [here](ht

## Usage

- The simplest way to initiate a TPC and use it in MCT is by using the function [get_target_platform_capabilities](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/methods/get_target_platform_capabilities.html#ug-get-target-platform-capabilities).
+ The simplest way to initiate a TPC and use it in MCT is by using the function [get_target_platform_capabilities](https://sony.github.io/model_optimization/docs/api/api_docs/methods/get_target_platform_capabilities.html#ug-get-target-platform-capabilities).

For example:

@@ -50,4 +50,4 @@ quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(Mo

Similarly, you can retrieve IMX500, TFLite and QNNPACK target-platform models for Keras and PyTorch frameworks.
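
The collapsed example in the hunk above boils down to a pattern like the following sketch (the "tensorflow"/"imx500" strings and the MobileNetV2 placeholder are assumptions inferred from the truncated context line, not quoted from the README):

```python
import numpy as np
import model_compression_toolkit as mct
from tensorflow.keras.applications import MobileNetV2  # placeholder float model

def representative_data_gen():
    # Stand-in calibration data; use real samples in practice.
    yield [np.random.rand(1, 224, 224, 3).astype("float32")]

# Fetch a ready-made target-platform capabilities (TPC) object for Keras on IMX500.
tpc = mct.get_target_platform_capabilities("tensorflow", "imx500")

quantized_model, quantization_info = mct.ptq.keras_post_training_quantization(
    MobileNetV2(), representative_data_gen, target_platform_capabilities=tpc)
```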

- For more information and examples, we highly recommend you to visit our [project website](https://sony.github.io/model_optimization/docs/api/experimental_api_docs/modules/target_platform.html#ug-target-platform).
+ For more information and examples, we highly recommend you to visit our [project website](https://sony.github.io/model_optimization/docs/api/api_docs/modules/target_platform.html#ug-target-platform).
