update default values for weight quantization (#1564)
askhade authored Aug 6, 2019
1 parent 7ee8aca commit 16087f3
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion onnxruntime/python/tools/quantization/README.md
@@ -62,7 +62,7 @@ onnx.save(quantized_model, 'path/to/the/quantized_model.onnx')
See below for a description of all the options to quantize():

- **model**: ModelProto to quantize
- - **per_channel**: *default: True*
+ - **per_channel**: *default: False*
If True, weights of Conv nodes are quantized per output channel.
If False, they are quantized per tensor. Refer [QLinearConv](https://github.com/onnx/onnx/blob/master/docs/Operators.md#qlinearconv) for more information.
- **nbits**: *default: 8*
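For context on what the `per_channel` flag changes: per-tensor quantization computes one scale for the whole weight tensor, while per-channel quantization computes one scale per output channel (axis 0 of a Conv weight). A minimal numpy sketch of the distinction; the `weight_scales` helper below is illustrative only and not part of the quantization tool:

```python
import numpy as np

def weight_scales(weights, per_channel=False, nbits=8):
    # Illustrative helper (not part of onnxruntime): symmetric scales for a
    # Conv weight tensor of shape (out_channels, in_channels, kH, kW).
    qmax = 2 ** (nbits - 1) - 1  # 127 for signed 8-bit
    if per_channel:
        # One scale per output channel, as QLinearConv supports.
        return np.abs(weights).max(axis=(1, 2, 3)) / qmax
    # A single scale shared by the whole tensor.
    return np.abs(weights).max() / qmax

w = np.random.randn(16, 3, 3, 3).astype(np.float32)
print(weight_scales(w))                          # one scalar scale
print(weight_scales(w, per_channel=True).shape)  # (16,), one per output channel
```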
2 changes: 1 addition & 1 deletion onnxruntime/python/tools/quantization/quantize.py
@@ -990,7 +990,7 @@ def _quantize_matmul(self, node, new_nodes_list):
return [node]


- def quantize(model, per_channel=True, nbits=8, quantization_mode=QuantizationMode.IntegerOps,
+ def quantize(model, per_channel=False, nbits=8, quantization_mode=QuantizationMode.IntegerOps,
static=False, asymmetric_input_types=False, input_quantization_params=None, output_quantization_params=None):
'''
Given an onnx model, create a quantized onnx model and save it into a file
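For reference, a usage sketch reflecting the new default; the import follows the example in this directory's README and may need adjusting to your checkout:

```python
import onnx
from quantize import quantize, QuantizationMode  # import path per the local README

model = onnx.load('path/to/the/model.onnx')

# After this commit the call below quantizes weights per tensor
# (per_channel=False); pass per_channel=True explicitly to keep the
# previous per-channel behavior.
quantized_model = quantize(model, quantization_mode=QuantizationMode.IntegerOps)

onnx.save(quantized_model, 'path/to/the/quantized_model.onnx')
```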
