[CPU EP] Add blocked quantization to QuantizeLinear op kernel (#20977)
### Description

Add blocked quantization to the QuantizeLinear op kernel.

If the quantize axis is not the last axis, the tensor is tiled into 1x128 blocks. Blocks are dispatched to multiple threads for concurrent processing. Currently only scalar instructions are supported on this path.

If the quantize axis is the last axis, the tensor is tiled into 1 x quant_block_size blocks. Blocks are dispatched to multiple threads for concurrent processing. If the output is an integer type, the MLAS kernel is called so SIMD instructions are used within each block (see the illustrative sketch below).

#### Benchmark data

20-core 2 GHz CPU, RelWithDebInfo config, 196 x 4096 tensor, quantizing float to int4x2.

Quantize along an axis before the last axis:
* single thread, scalar instructions: 31380900 ns
* 8 threads, scalar instructions: 5098620 ns

Quantize along the last axis:
* single thread, scalar instructions: 27927900 ns
* 8 threads, SIMD instructions: 102261 ns

More threads, SIMD instructions, and a larger block size all help.

### Motivation and Context

ONNX added blocked quantization to QuantizeLinear in opset 21.
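For reference, the following is a minimal, illustrative sketch of the last-axis blocked path described above: the tensor is viewed as independent blocks of `quant_block_size` elements, each block shares one scale, and blocks can be dispatched to a thread pool. The function and variable names here (`QuantizeBlockedLastAxis`, etc.) are hypothetical and are not taken from the commit; the actual kernel additionally dispatches to MLAS SIMD routines per block for integer output types.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstddef>

// Hypothetical sketch: blocked QuantizeLinear along the last axis.
// Each block of `quant_block_size` elements shares one scale. Blocks are
// independent, so the outer loop is a natural candidate for a parallel-for
// over a thread pool.
void QuantizeBlockedLastAxis(const float* input, int8_t* output,
                             const float* scales,  // one scale per block
                             std::size_t num_blocks,
                             std::size_t quant_block_size) {
  for (std::size_t b = 0; b < num_blocks; ++b) {          // per-block work item
    const float inv_scale = 1.0f / scales[b];
    const float* src = input + b * quant_block_size;
    int8_t* dst = output + b * quant_block_size;
    for (std::size_t i = 0; i < quant_block_size; ++i) {  // scalar fallback path
      float v = std::nearbyintf(src[i] * inv_scale);
      v = std::clamp(v, -128.0f, 127.0f);                 // clamp to int8 range
      dst[i] = static_cast<int8_t>(v);
    }
  }
}
```

When the quantize axis is not the last axis, the same idea applies, but the commit tiles the tensor into fixed 1x128 blocks instead, so threads receive contiguous, evenly sized chunks of work regardless of the quantization block size.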
1 parent 17d5dc5 · commit 9be3034
Showing 4 changed files with 1,957 additions and 34 deletions.