This example demonstrates how to quantize a TensorFlow model using Neural Compressor's default user-facing APIs.
```shell
pip install -r requirements.txt
```
Note: make sure the installed TensorFlow is a validated version.
The TensorFlow models repo provides scripts and instructions to download, process, and convert the ImageNet dataset to the TFRecord format. Related scripts are also available in the TF image_recognition example; a sketch of how the resulting TFRecord files can feed the dataloaders follows.
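Once the TFRecord files are ready, the `calib_dataloader` and `eval_dataloader` used in the quantization step below can be built from them. A minimal sketch, assuming Neural Compressor's built-in `TensorflowImageRecord` dataset and `DefaultDataLoader` (the actual example constructs its dataloaders in test.py; batch sizes here are illustrative):

```python
from neural_compressor.data import (BilinearImagenetTransform, ComposeTransform,
                                    DefaultDataLoader, TensorflowImageRecord)

# Decode the ImageNet TFRecord files and resize images to the
# 224x224 input expected by MobileNet V1.
transform = ComposeTransform(transform_list=[
    BilinearImagenetTransform(height=224, width=224)])

calib_dataset = TensorflowImageRecord(root='/path/to/imagenet/', transform=transform)
calib_dataloader = DefaultDataLoader(dataset=calib_dataset, batch_size=10)

eval_dataset = TensorflowImageRecord(root='/path/to/imagenet/', transform=transform)
eval_dataloader = DefaultDataLoader(dataset=eval_dataset, batch_size=1)
```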
```shell
wget https://storage.googleapis.com/intel-optimized-tensorflow/models/v1_6/mobilenet_v1_1.0_224_frozen.pb
```
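The downloaded frozen graph can be inspected with plain TensorFlow to confirm it loads correctly; a small sketch:

```python
import tensorflow as tf

# Load the frozen GraphDef and print a few node names to verify the file.
graph_def = tf.compat.v1.GraphDef()
with open('./mobilenet_v1_1.0_224_frozen.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())
print([node.name for node in graph_def.node[:5]], '...', graph_def.node[-1].name)
```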
- Run quantization
```shell
python test.py --tune --dataset_location=/path/to/imagenet/
```
- Run benchmark; make sure to benchmark the model only after tuning.
```shell
python test.py --benchmark --dataset_location=/path/to/imagenet/
```
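Besides the script's `--benchmark` flag, Neural Compressor also offers a built-in benchmark API. A minimal sketch, assuming the `eval_dataloader` from the dataset step above and illustrative `BenchmarkConfig` values:

```python
from neural_compressor.benchmark import fit as benchmark_fit
from neural_compressor.config import BenchmarkConfig

# Illustrative settings; adjust cores/instances for your machine.
# './int8.pb' is the quantized model produced by the tuning step.
b_conf = BenchmarkConfig(warmup=10, iteration=100,
                         cores_per_instance=4, num_of_instance=1)
benchmark_fit(model='./int8.pb', conf=b_conf, b_dataloader=eval_dataloader)
```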
- We only need to add the following lines to quantize the model and produce an int8 model.
```python
import tensorflow as tf

from neural_compressor import Metric
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.quantization import fit

config = PostTrainingQuantConfig()  # default post-training quantization config
top1 = Metric(name="topk", k=1)     # built-in top-1 accuracy metric

# calib_dataloader / eval_dataloader come from the dataset step above.
quantized_model = fit(
    model="./mobilenet_v1_1.0_224_frozen.pb",
    conf=config,
    calib_dataloader=calib_dataloader,
    eval_dataloader=eval_dataloader,
    eval_metric=top1)

# Save the quantized graph as a frozen int8 model.
tf.io.write_graph(graph_or_graph_def=quantized_model.model,
                  logdir='./',
                  name='int8.pb',
                  as_text=False)
```
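By default, `PostTrainingQuantConfig` drives accuracy-aware tuning with its built-in criteria. If tighter control is needed, the config can be customized; a minimal sketch with illustrative values (these parameter choices are assumptions about typical usage, not taken from this example):

```python
from neural_compressor.config import (AccuracyCriterion, PostTrainingQuantConfig,
                                      TuningCriterion)

# Illustrative values: tolerate at most 1% relative accuracy loss and
# stop tuning after 100 trials.
config = PostTrainingQuantConfig(
    accuracy_criterion=AccuracyCriterion(criterion='relative', tolerable_loss=0.01),
    tuning_criterion=TuningCriterion(max_trials=100))
```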
- Run benchmark with a self-defined eval_func to test accuracy and performance; a sketch of such a function follows the snippet below.
```python
# Optional: run benchmark with the self-defined evaluate function (sketched below).
from neural_compressor.model import Model

evaluate(Model('./int8.pb'))
```
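For reference, a minimal sketch of what such a self-defined evaluate function might look like. The real implementation lives in test.py; attribute names like `model.input_tensor` / `model.sess` and the label handling are assumptions here:

```python
import time

import numpy as np

def evaluate(model):
    # `model` is a Neural Compressor TensorFlow model wrapper; it is assumed
    # to expose the session and the graph's input/output tensors.
    input_tensor = model.input_tensor
    output_tensor = model.output_tensor if len(model.output_tensor) > 1 \
        else model.output_tensor[0]

    correct, total, latencies = 0, 0, []
    for inputs, labels in eval_dataloader:  # eval_dataloader from the dataset step
        feed_dict = dict(zip(input_tensor, [inputs]))
        start = time.time()
        predictions = model.sess.run(output_tensor, feed_dict)
        latencies.append(time.time() - start)
        # Top-1 accuracy; depending on the transform, a label offset
        # (ImageNet background class) may need to be handled here.
        correct += int(np.sum(np.argmax(predictions, axis=-1) == np.squeeze(labels)))
        total += np.size(labels)

    print('Latency: %.3f ms per sample' % (np.mean(latencies) * 1000))
    acc = correct / total
    print('Accuracy: %.4f' % acc)
    return acc
```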