diff --git a/README.md b/README.md
index e0f0ff581..e5469dff2 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
MCT can quantize an existing 32-bit floating-point model to an 8-bit (or lower) fixed-point model without compromising accuracy.
Below is a graph of [MobileNetV2](https://pytorch.org/vision/main/models/generated/torchvision.models.mobilenet_v2.html) accuracy on ImageNet vs average bit-width of weights (X-axis), using **single-precision** quantization, **mixed-precision** quantization, and mixed-precision quantization with GPTQ.
For more results, please see [1].
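To illustrate what reducing a 32-bit float weight to an 8-bit fixed-point value involves, here is a minimal conceptual sketch of affine (scale/zero-point) 8-bit quantization. This is not MCT's actual implementation; the function names and the toy weight values are illustrative only.

```python
# Conceptual sketch of affine 8-bit quantization (NOT MCT's implementation):
# map float weights onto the int8 range [-128, 127] via a scale and
# zero-point, then map back to floats for inference-time comparison.

def quantize_8bit(weights):
    """Quantize floats to int8 codes; return (codes, scale, zero_point)."""
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / 255.0 or 1.0  # guard against a constant tensor
    zero_point = round(-w_min / scale) - 128
    codes = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return codes, scale, zero_point

def dequantize_8bit(codes, scale, zero_point):
    """Recover approximate float weights from int8 codes."""
    return [(c - zero_point) * scale for c in codes]

if __name__ == "__main__":
    weights = [-0.7, 0.0, 0.31, 1.2]  # toy example values
    codes, scale, zero_point = quantize_8bit(weights)
    restored = dequantize_8bit(codes, scale, zero_point)
    # Each restored weight differs from the original by at most one scale step.
    print(codes, restored)
```

The rounding error per weight is bounded by the scale, which is why a well-chosen per-tensor (or mixed-precision, per-layer) bit-width can keep accuracy close to the float baseline.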