Preparing TPC for weights per-attribute quantization #925
Merged
Conversation
… parts (in the Keras default TPC and in set node config)
* fix set-config-to-node issues with nodes that don't have a kernel
elad-c reviewed on Jan 22, 2024
Review threads (all resolved):
* model_compression_toolkit/core/common/quantization/set_node_quantization_config.py (3 outdated threads)
* ...l_compression_toolkit/target_platform_capabilities/target_platform/op_quantization_config.py (3 outdated threads)
* ...arget_platform_capabilities/target_platform/targetplatform2framework/operations_to_layers.py (outdated)
* ...atform_capabilities/target_platform/targetplatform2framework/target_platform_capabilities.py (outdated)
* model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1/tp_model.py
* model_compression_toolkit/target_platform_capabilities/tpc_models/imx500_tpc/v1/tpc_keras.py (outdated)
… modify test TPCs and update the function that performs the connection)
Review threads (all resolved):
* ...l_compression_toolkit/target_platform_capabilities/target_platform/op_quantization_config.py (4 threads, 1 outdated)
* ...atform_capabilities/target_platform/targetplatform2framework/target_platform_capabilities.py (outdated)
* model_compression_toolkit/core/common/quantization/node_quantization_config.py (outdated)
* tests/keras_tests/feature_networks_tests/test_features_runner.py (outdated)
elad-c approved these changes on Jan 31, 2024
Review threads (all resolved):
* ...l_compression_toolkit/target_platform_capabilities/target_platform/op_quantization_config.py (2 threads, 1 outdated)
reuvenperetz approved these changes on Jan 31, 2024
lior-dikstein pushed a commit that referenced this pull request on Feb 25, 2024
* The weights configuration in OpQuantizationConfig is extracted to a new class named AttributeQuantizationConfig, which holds the weights quantization configuration per attribute.
* Each OpQuantizationConfig now includes a default_attribute_config and an attributes_config_mapping, which maps an attribute to that attribute's specific quantization configuration. The default config is used to quantize all non-specified weight attributes.
* By default, Kernel and Bias attributes are added to the base op config of all our TP models. The kernel is quantized the same way weights have been quantized so far; bias quantization is disabled.
* To enable attribute quantization with a specific config per attribute, we created a mapping mechanism between a general attribute name (e.g., "KERNEL_ATTR") and that attribute's name in the framework.

Co-authored-by: Ofir Gordon <[email protected]>
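The per-attribute structure described above can be sketched as follows. This is a minimal illustration, not the library's actual API: the class and field names (AttributeQuantizationConfig, default_attribute_config, attributes_config_mapping) are taken from the description, but the fields inside AttributeQuantizationConfig and the get_attr_config helper are assumptions for the example.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AttributeQuantizationConfig:
    # Illustrative fields; the real class likely carries more settings.
    weights_n_bits: int = 8
    enable_weights_quantization: bool = True

@dataclass
class OpQuantizationConfig:
    # Used for any weight attribute without an explicit entry in the mapping.
    default_attribute_config: AttributeQuantizationConfig = field(
        default_factory=AttributeQuantizationConfig)
    # Maps a general attribute name (e.g., "KERNEL_ATTR") to its config.
    attributes_config_mapping: Dict[str, AttributeQuantizationConfig] = field(
        default_factory=dict)

    def get_attr_config(self, attr_name: str) -> AttributeQuantizationConfig:
        # Fall back to the default config for non-specified attributes.
        return self.attributes_config_mapping.get(
            attr_name, self.default_attribute_config)

# Mirrors the defaults described above: kernel quantized,
# bias quantization disabled.
op_cfg = OpQuantizationConfig(attributes_config_mapping={
    "KERNEL_ATTR": AttributeQuantizationConfig(weights_n_bits=8),
    "BIAS_ATTR": AttributeQuantizationConfig(enable_weights_quantization=False),
})
print(op_cfg.get_attr_config("BIAS_ATTR").enable_weights_quantization)  # False
print(op_cfg.get_attr_config("some_other_weight").weights_n_bits)       # 8
```

The lookup-with-fallback is the key design point: attributes listed in the mapping get their own treatment, while everything else silently inherits the default.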
Pull Request Description:
Preparing the ground for quantization of all weight attributes in the TPC:
To enable attribute quantization with a specific config per attribute, we created a mapping mechanism between a general attribute name (e.g., "KERNEL_ATTR") and that attribute's name in the framework (e.g., "kernel" for Keras Conv2D or "depthwise_kernel" for Keras DepthwiseConv2D). This mapping is activated when the TPC is initialized (in the framework).
Additional modifications:
Checklist before requesting a review: