
Commit

Fix PR comments
Idan-BenAmi committed Nov 12, 2023
1 parent 7a20862 commit c8ec6c6
Showing 6 changed files with 67 additions and 280 deletions.
18 changes: 9 additions & 9 deletions tutorials/notebooks/example_keras_nanodet_plus.ipynb
@@ -111,7 +111,7 @@
"## Floating Point Model\n",
"\n",
"### Load the pre-trained weights of Nanodet-Plus\n",
"We begin by loading the pre-trained weights of `nanodet-plus-m-1.5x-416` using `torch.load`, as the original model is in PyTorch format. Please make sure to download the pretrained weights from [here](https://github.com/RangiLyu/nanodet#model-zoo) into the 'content' directory on your drive, otherwise, specify the correct file path."
"We begin by loading the pre-trained weights of `nanodet-plus-m-1.5x-416` using `torch.load`, as the original model is in PyTorch format. Please make sure to download the pretrained weights from [here](https://github.com/RangiLyu/nanodet#model-zoo) and upload them into the '/content' folder on your drive, otherwise, specify the correct file path."
]
},
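As an aside on the `torch.load` step described in the markdown cell above, here is a minimal, hedged sketch of loading such a checkpoint. The file name and the 'state_dict' key are assumptions based on typical Nanodet releases, not taken from the notebook:

import torch

# Hypothetical path; point this at wherever the pre-trained weights were downloaded.
WEIGHTS_PATH = '/content/nanodet-plus-m-1.5x-416_checkpoint.ckpt'

# Load on CPU. Nanodet checkpoints usually nest the weights under a 'state_dict'
# key (an assumption here), so fall back to the raw object if that key is absent.
checkpoint = torch.load(WEIGHTS_PATH, map_location='cpu')
state_dict = checkpoint.get('state_dict', checkpoint)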
{
@@ -267,15 +267,15 @@
"source": [
"import model_compression_toolkit as mct\n",
"\n",
"TRAIN_DATASET_FOLDER = '/content/coco/val2017'\n",
"TRAIN_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"REPRESENTATIVE_DATASET_FOLDER = '/content/coco/val2017'\n",
"REPRESENTATIVE_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"n_iters = 20\n",
"\n",
"# Load COCO train set\n",
"train_dataset = coco_dataset_generator(dataset_folder=TRAIN_DATASET_FOLDER,\n",
" annotation_file=TRAIN_DATASET_ANNOTATION_FILE,\n",
" preprocess=nanodet_preprocess,\n",
" batch_size=BATCH_SIZE)\n",
"# Load COCO representative dataset\n",
"representative_dataset = coco_dataset_generator(dataset_folder=REPRESENTATIVE_DATASET_FOLDER,\n",
" annotation_file=REPRESENTATIVE_DATASET_ANNOTATION_FILE,\n",
" preprocess=nanodet_preprocess,\n",
" batch_size=BATCH_SIZE)\n",
"\n",
"# Define representative dataset generator\n",
"def get_representative_dataset(n_iter, train_loader):\n",
@@ -289,7 +289,7 @@
"\n",
"# Preform post training quantization \n",
"quant_model, _ = mct.ptq.keras_post_training_quantization_experimental(model,\n",
" get_representative_dataset(n_iters, train_dataset))\n",
" get_representative_dataset(n_iters, representative_dataset))\n",
"\n",
"print('Quantized model is ready')"
]
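For context on the `get_representative_dataset` helper renamed in the hunk above: MCT's post-training quantization expects a zero-argument callable that yields lists of input batches. A standalone sketch with dummy data follows; the array shapes, batch size, and iteration count are assumptions, not values taken from the notebook:

import numpy as np

def get_representative_dataset(n_iter, dataset_loader):
    # Return a zero-argument generator function, which is what MCT's PTQ API consumes.
    def representative_dataset():
        ds_iter = iter(dataset_loader)
        for _ in range(n_iter):
            # Each yielded element is a list of model inputs (here, a single image batch).
            yield [next(ds_iter)[0]]
    return representative_dataset

# Dummy stand-in for the COCO loader: 20 (images, annotations) pairs with 416x416 inputs.
dummy_loader = [(np.random.rand(5, 416, 416, 3).astype(np.float32), None) for _ in range(20)]
rep_gen = get_representative_dataset(n_iter=20, dataset_loader=dummy_loader)
for batch in rep_gen():
    assert batch[0].shape == (5, 416, 416, 3)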
32 changes: 16 additions & 16 deletions tutorials/notebooks/example_keras_yolov8n.ipynb
@@ -14,7 +14,7 @@
"\n",
"In this tutorial, we'll demonstrate the post-training quantization using MCT for a pre-trained object detection model in Keras. Specifically, we'll integrate post-processing, including the non-maximum suppression (NMS) layer, into the model. This integration aligns with the imx500 target platform capabilities.\n",
"\n",
"In this example we will use an existing pre-trained YoloV8-nano model taken from [https://github.com/ultralytics/ultralytics](https://github.com/ultralytics/ultralytics). We will convert the model to a Tensorflow model that includes box decoding and NMS layer. Further, we will quantize the model using MCT post training quantization and evaluate the performance of the floating point model and the quantized model on COCO dataset.\n",
"In this example we will use an existing pre-trained YoloV8-nano model taken from [Ultralytics](https://github.com/ultralytics/ultralytics). We will convert the model to a Tensorflow model that includes box decoding and NMS layer. Further, we will quantize the model using MCT post training quantization and evaluate the performance of the floating point model and the quantized model on COCO dataset.\n",
"\n",
"\n",
"## Summary\n",
@@ -56,7 +56,7 @@
{
"cell_type": "markdown",
"source": [
" Clone a copy of the [MCT](https://github.com/sony/model_optimization) (Model Compression Toolkit) into your current directory. This step ensures that you have access to the tutorials [resources](https://github.com/sony/model_optimization/tree/main/tutorials/resources) folder which contains all the necessary utility functions for this tutorial"
" Clone a copy of the [MCT](https://github.com/sony/model_optimization) (Model Compression Toolkit) into your current directory. This step ensures that you have access to the tutorials [resources](https://github.com/sony/model_optimization/tree/main/tutorials/resources) folder which contains all the necessary utility functions for this tutorial."
],
"metadata": {
"collapsed": false
@@ -247,8 +247,7 @@
"\n",
"### Post training quantization using Model Compression Toolkit \n",
"\n",
"Now, we're all set to use MCT's post-training quantization. To begin, we'll define a representative dataset and proceed with the model quantization. Please note that, for demonstration purposes, we'll use the evaluation dataset as our representative dataset. We'll calibrate the model using 100 representative images, divided into 20 iterations of 'batch_size' images each.\n",
"Same as the above section, please ensure that the dataset path has been set correctly."
"Now, we're all set to use MCT's post-training quantization. To begin, we'll define a representative dataset and proceed with the model quantization. Please note that, for demonstration purposes, we'll use the evaluation dataset as our representative dataset. We'll calibrate the model using 100 representative images, divided into 20 iterations of 'batch_size' images each."
]
},
{
@@ -259,30 +258,31 @@
"outputs": [],
"source": [
"import model_compression_toolkit as mct\n",
"from typing import Iterator, Tuple\n",
"\n",
"TRAIN_DATASET_FOLDER = '/content/coco/val2017/'\n",
"TRAIN_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"REPRESENTATIVE_DATASET_FOLDER = '/content/coco/val2017/'\n",
"REPRESENTATIVE_DATASET_ANNOTATION_FILE = '/content/coco/annotations/instances_val2017.json'\n",
"n_iters = 20\n",
"\n",
"# Load COCO train set\n",
"train_dataset = coco_dataset_generator(dataset_folder=TRAIN_DATASET_FOLDER,\n",
" annotation_file=TRAIN_DATASET_ANNOTATION_FILE,\n",
" preprocess=yolov8_preprocess,\n",
" batch_size=BATCH_SIZE)\n",
"# Load COCO representative dataset\n",
"representative_dataset = coco_dataset_generator(dataset_folder=REPRESENTATIVE_DATASET_FOLDER,\n",
" annotation_file=REPRESENTATIVE_DATASET_ANNOTATION_FILE,\n",
" preprocess=yolov8_preprocess,\n",
" batch_size=BATCH_SIZE)\n",
"\n",
"# Define representative dataset generator\n",
"def get_representative_dataset(n_iter, train_loader):\n",
"\n",
" def representative_dataset():\n",
" ds_iter = iter(train_loader)\n",
"def get_representative_dataset(n_iter: int, dataset_loader: Iterator[Tuple]):\n",
" \n",
" def representative_dataset() -> Iterator[List]:\n",
" ds_iter = iter(dataset_loader)\n",
" for _ in range(n_iter):\n",
" yield [next(ds_iter)[0]]\n",
"\n",
" return representative_dataset\n",
"\n",
"# Preform post training quantization \n",
"quant_model, _ = mct.ptq.keras_post_training_quantization_experimental(model,\n",
" get_representative_dataset(n_iters, train_dataset))\n",
" get_representative_dataset(n_iters, representative_dataset))\n",
"\n",
"print('Quantized model is ready')"
]
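A quick, optional sanity check one might run after the quantization cell above, assuming `quant_model` and `representative_dataset` exist as in this diff and that the loader is a plain Python iterable of (images, annotations) batches (this snippet is not part of the committed notebook):

# Run the quantized Keras model on one preprocessed batch from the representative loader.
images, _ = next(iter(representative_dataset))
outputs = quant_model(images)
print('Quantized model ran on a batch of', len(images), 'images; output type:', type(outputs))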
14 changes: 14 additions & 0 deletions tutorials/resources/__init__.py
@@ -0,0 +1,14 @@
# Copyright 2023 Sony Semiconductor Israel, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
255 changes: 0 additions & 255 deletions tutorials/resources/coco_evaluation.py

This file was deleted.

14 changes: 14 additions & 0 deletions tutorials/resources/nanodet/__init__.py
@@ -0,0 +1,14 @@
# Copyright 2023 Sony Semiconductor Israel, Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
