diff --git a/README.md b/README.md
index e0f0ff581..e5469dff2 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@

-
+

______________________________________________________________________
@@ -67,7 +67,7 @@ For further details, please see [Supported features and algorithms](#high-level-

-
+

@@ -148,16 +148,16 @@ Currently, MCT is being tested on various Python, Pytorch and TensorFlow version
## Results

-
-
-
-
+
+
+
+
MCT can quantize an existing 32-bit floating-point model to an 8-bit fixed-point (or less) model without compromising accuracy. Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs. average bit-width of weights (X-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.
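As a rough sketch of how such a result is produced, the snippet below runs MCT post-training quantization on a pretrained MobileNetV2. It is a minimal example, not the project's reference workflow: it assumes the `mct.ptq.pytorch_post_training_quantization` entry point and a random-data calibration generator, and exact argument names vary between MCT releases.

```python
# Minimal sketch: 8-bit post-training quantization of MobileNetV2 with MCT.
# Assumes the mct.ptq entry point; signatures may differ between releases.
import model_compression_toolkit as mct
import torch
from torchvision.models import mobilenet_v2

# Pretrained float32 model to be quantized.
model = mobilenet_v2(weights="IMAGENET1K_V2").eval()

# Representative dataset generator: yields batches used to calibrate
# activation quantization ranges (random data here; use real images in practice).
def representative_data_gen():
    for _ in range(20):
        yield [torch.randn(1, 3, 224, 224)]

# Single-precision post-training quantization; returns the quantized model
# plus bookkeeping info about the applied quantization.
quantized_model, quantization_info = mct.ptq.pytorch_post_training_quantization(
    model,
    representative_data_gen,
)
```

Mixed-precision and GPTQ runs follow the same pattern, configured through resource-utilization targets in `mct.core` and the `mct.gptq` entry points; see the MCT documentation for the exact configuration objects.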

-
+
For more results, please see [1]