Bump transformers version #3272
Conversation
I took a look at the references, and they look OK.
Most of the changes come from switching the attention implementation to SDPA.
The pruning-group search doesn't support it and finds a much lower number of groups.
@nikita-malininn applied a WA for the failed tests, since the issue is in the deprecated components:
nikita-malininn#7
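For context on the reference changes: SDPA collapses the attention pattern that "eager" attention expresses as separate matmul/softmax ops into a single fused op, which is why graph-based analyses such as the pruning-group search see fewer nodes. A minimal sketch of the equivalence (assuming PyTorch; `eager_attention` is a hypothetical helper, not NNCF code):

```python
import torch
import torch.nn.functional as F

def eager_attention(q, k, v):
    # "eager" attention: explicit matmul + softmax ops, so graph-based
    # analyses (e.g. pruning-group search) can see each individual node
    scale = q.size(-1) ** -0.5
    weights = torch.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)
    return weights @ v

q, k, v = (torch.randn(1, 4, 8, 16) for _ in range(3))

# SDPA computes the same result as one fused op
fused = F.scaled_dot_product_attention(q, k, v)
assert torch.allclose(fused, eager_attention(q, k, v), atol=1e-5)
```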
```diff
@@ -71,7 +71,7 @@ class PTExporter(Exporter):
     This class provides export of the compressed model to the ONNX format.
     """

-    _ONNX_DEFAULT_OPSET = 13
+    _ONNX_DEFAULT_OPSET = 14
```
@AlexanderDokuchaev do we have any tests for torch export outside the pre-commit scope?
@nikita-malininn I found PTQ conformance tests with export: the
ImageClassificationTimm
and ImageClassificationTorchvision
test cases in https://github.com/openvinotoolkit/nncf/blob/develop/tests/post_training/model_scope.py
IMO, it's worth launching them.
WA for failed tests
### Changes

- Bumped `transformers` version to `>=4.48.0`
- Bumped `optimum-intel` version to `>=1.22.0` (as it requires `transformers`)
- Bumped `optimum` version to `>=1.24.0` (as it requires `transformers`)
- Updated the statistics caching test with `position_ids` to fix:
  > Exception from src/plugins/intel_cpu/src/node.cpp:769: [CPU] Concat node with name '__module.model.model.decoder.layers.0.self_attn/aten::cat/Concat' Check 'TRShape::merge_into(output_shape, in_copy)' failed at src/core/shape_inference/include/concat_shape_inference.hpp:43: While validating node 'opset1::Concat __module.model.model.decoder.layers.0.self_attn/aten::cat/Concat (opset1::Parameter Parameter_753753[0]:f32[?,4,?,4], opset1::Transpose __module.model.model.decoder.layers.0.self_attn/aten::transpose/Transpose_1[0]:f32[?,4,?,4]) -> (f32[?,4,?,4])' with friendly_name '__module.model.model.decoder.layers.0.self_attn/aten::cat/Concat': Shape inference input shapes {{1,4,0,4},{0,4,0,4}} Argument shapes are inconsistent; they must have the same rank, and must have equal dimension everywhere except on the concatenation axis (axis 2).
- Bumped `accelerate` version to `>=1.1.0` to fix:
  > NotImplementedError: data_seed requires Accelerate version `accelerate` >= 1.1.0. This is not supported and we recommend you to update your version.
- Added `num_items_in_batch` to `compute_loss` to fix:
  > TypeError: CompressionTrainer.compute_loss() got an unexpected keyword argument 'num_items_in_batch'
- Bumped `_ONNX_DEFAULT_OPSET` from 13 to 14 to fix:
  > torch.onnx.errors.UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 13 is not supported. Support for this operator was added in version 14, try exporting with this version.
- Updated sparsity/pruning/NAS references for tests.

### Reason for changes

- Security issues with `transformers<4.48`

### Related tickets

- N/A

### Tests

- install - https://github.com/openvinotoolkit/nncf/actions/runs/13393410660 - passed
- weight compression - https://github.com/openvinotoolkit/nncf/actions/runs/13393412141 - passed
- examples - https://github.com/openvinotoolkit/nncf/actions/runs/13393412919 - passed
- PTQ - manual/job/post_training_quantization/622/ - passed
- PTWC - manual/job/post_training_weight_compression/327/ - passed

---------

Co-authored-by: Nikolay Lyalyushkin <[email protected]>
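The `compute_loss` fix above can be illustrated with a standalone sketch. `BaseTrainer` here is a plain-Python stand-in for `transformers.Trainer` (not the real class), which in newer `transformers` versions forwards an extra `num_items_in_batch` keyword into `compute_loss`, so any override must accept it:

```python
class BaseTrainer:
    # stand-in for transformers.Trainer, which now forwards
    # num_items_in_batch into compute_loss
    def training_step(self, model, inputs, num_items_in_batch=None):
        return self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)

    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        # toy loss: mean over the declared number of items in the batch
        return sum(inputs) / (num_items_in_batch or len(inputs))

class CompressionTrainer(BaseTrainer):
    # Before the fix the override lacked num_items_in_batch, so the call
    # in training_step raised: TypeError: ... unexpected keyword argument
    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        loss = super().compute_loss(model, inputs, return_outputs, num_items_in_batch)
        return loss  # in NNCF a compression penalty would be added here

print(CompressionTrainer().training_step(None, [2.0, 4.0], num_items_in_batch=2))  # 3.0
```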