
[INFORMATION] mmyolo - Model Surgery using edgeai-modeloptimization - to create lite models #7

mathmanu opened this issue Jul 16, 2024 · 3 comments


mathmanu commented Jul 16, 2024

Introduction

mmyolo (https://github.com/open-mmlab/mmyolo) is a repository that contains several interesting Object Detection models, including YOLOv5, YOLOv7, YOLOX and YOLOv8.

Here we describe how to apply Model Surgery on mmyolo to create lite models that run faster on Embedded Systems.

Background - What actually happens in Model Surgery

The types of Operators/Layers used in popular models are increasing rapidly, and not all of them run efficiently on embedded devices. For example, a ReLU activation layer is much faster than a Swish activation layer, because the simple ReLU operation is implemented in Hardware at full speed. This is just one example; there are several others.

In many cases it is possible to replace inefficient layers with their efficient alternatives without actually modifying the model code. This is done by modifying the Python model object after the model has been instantiated.
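
For illustration, here is a minimal sketch of this idea in plain PyTorch (not using edgeai-modeloptimization): it walks an already-instantiated model and swaps every Swish/SiLU activation for ReLU. The replace_activations helper and the toy model are hypothetical examples, not part of any repository mentioned here.

import torch.nn as nn

def replace_activations(module: nn.Module) -> None:
    # Recursively replace SiLU (Swish) activations with ReLU on an
    # already-instantiated model, without touching the model's source code.
    for name, child in module.named_children():
        if isinstance(child, nn.SiLU):
            setattr(module, name, nn.ReLU(inplace=True))
        else:
            replace_activations(child)

# Toy model that uses SiLU activations
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.SiLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.SiLU())
replace_activations(model)
print(model)  # both activations are now nn.ReLU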

How to use edgeai-modeloptimization

edgeai-modeloptimization (https://github.com/TexasInstruments/edgeai-tensorlab/tree/main/edgeai-modeloptimization) is a package that can automate some aspects of Model Surgery.

It provides edgeai_torchmodelopt, a Python package that helps modify PyTorch models without manually editing the model code.

The exact location is here: https://github.com/TexasInstruments/edgeai-tensorlab/tree/main/edgeai-modeloptimization/torchmodelopt

It provides various types of model surgery options as described here:
https://github.com/TexasInstruments/edgeai-tensorlab/blob/main/edgeai-modeloptimization/torchmodelopt/docs/surgery.md
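
For reference, here is a minimal sketch of how these conversion functions are called (the entry points and call pattern are taken from the patch shown below; exact behaviour and signatures may vary across versions):

import torch.nn as nn
from edgeai_torchmodelopt import xmodelopt

# Toy model standing in for a real backbone (hypothetical example)
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.SiLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.SiLU())

# surgery v1: module-based replacement of inefficient layers
lite_model = xmodelopt.surgery.v1.convert_to_lite_model(model)

# surgery v2: torch.fx based replacement (alternative)
# lite_model = xmodelopt.surgery.v2.convert_to_lite_fx(model)

print(lite_model)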

Patch file

The commit id of mmyolo (https://github.com/open-mmlab/mmyolo) for this explanation is: 8c4d9dc503dc8e327bec8147e8dc97124052f693

This patch file includes the model surgery modification in train.py, along with other modifications in val.py, prototxt export, etc.
0001-2024-Aug-2-mmyolo.commit-8c4d9dc5.-model-surgery-with-edgeai-modeloptimization.txt

Patching mmyolo:

git clone https://github.com/open-mmlab/mmyolo.git
cd mmyolo
git checkout 8c4d9dc5
git am 0001-2024-Aug-2-mmyolo.commit-8c4d9dc5.-model-surgery-with-edgeai-modeloptimization.txt

Run training:

python3 tools/train.py <configfile> --model-surgery 1

You can also use tools/dist_train.sh (just make sure that the --model-surgery 1 argument is passed inside it).
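
For example, to run single-GPU training with one of the config files listed in the table below:

python3 tools/train.py configs/yolov8/yolov8_s_syncbn_fast_8xb16-500e_coco.py --model-surgery 1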

Expected Accuracy

The table below shows the expected accuracy of the lite models after training.

| Dataset | Original Model | Lite Model | Input Size | Original AP[0.5:0.95]%, AP50% | Lite AP[0.5:0.95]%, AP50% | GigaMACS | Config file |
|---|---|---|---|---|---|---|---|
| YOLOv5 models | | | | | | | |
| COCO | YOLOv5-nano | YOLOv5-nano-lite | 640x640 | 28.0, 45.9 | 25.2, 42.1 | 2.07 | configs/yolov5/yolov5_n-v61_syncbn_fast_8xb16-300e_coco.py |
| COCO | YOLOv5-small | YOLOv5-small-lite | 640x640 | 37.7, 57.1 | 35.5, 54.7 | 7.89 | configs/yolov5/yolov5_s-v61_syncbn_fast_8xb16-300e_coco.py |
| YOLOv7 models | | | | | | | |
| COCO | YOLOv7-tiny | YOLOv7-tiny-lite | 640x640 | 37.5, 55.8 | 36.7, 55.0 | 6.87 | configs/yolov7/yolov7_tiny_syncbn_fast_8x16b-300e_coco.py |
| COCO | YOLOv7-large | YOLOv7-large-lite | 640x640 | 51.0, 69.0 | 48.1, 66.4 | 52.95 | configs/yolov7/yolov7_l_syncbn_fast_8x16b-300e_coco.py |
| YOLOv8 models | | | | | | | |
| COCO | YOLOv8-nano | YOLOv8-nano-lite | 640x640 | 37.2, 52.7 | 34.5, 49.7 | - | configs/yolov8/yolov8_n_syncbn_fast_8xb16-500e_coco.py |
| COCO | YOLOv8-small | YOLOv8-small-lite | 640x640 | 44.2, 61.0 | 42.4, 58.8 | 14.33 | configs/yolov8/yolov8_s_syncbn_fast_8xb16-500e_coco.py |
| YOLOX models | | | | | | | |
| COCO | YOLOX-tiny | YOLOX-tiny-lite | 416x416 | 32.7, 50.3 | 31.1, 48.4 | 3.25 | configs/yolox/yolox_tiny_fast_8xb8-300e_coco.py |
| COCO | YOLOX-small | YOLOX-small-lite | 640x640 | 40.7, 59.6 | 38.7, 57.4 | 7.85 | configs/yolox/yolox_s_fast_8xb8-300e_coco.py |

Notes

  • GigaMACS: Complexity in Giga Multiply-Accumulations required for one inference (lower is better). This is an important metric to watch when selecting models for embedded inference.
  • Accuracy for Object Detection on the COCO dataset primarily uses two metrics, AP[0.5:0.95] and AP50 (in percentages). AP[0.5:0.95] is the Average Precision averaged over IoU thresholds from 0.5 to 0.95 (in steps of 0.05), and AP50 is the Average Precision at an IoU threshold of 0.5 (see the formula after this list). If only one accuracy metric is mentioned in a table cell, it is AP[0.5:0.95]. Be sure to compare using the same metric when comparing across various detectors or configurations.
  • Input size in the table (width x height) is the resolution of the model input. Original input images are resized to that resolution either with the aspect ratio preserved (which may require padding) or without, depending on the keep_ratio flag in the config files.
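
For reference, these metrics follow the standard COCO definition:

$$\mathrm{AP}[0.5{:}0.95] = \frac{1}{10} \sum_{t \in \{0.50,\, 0.55,\, \ldots,\, 0.95\}} \mathrm{AP}_t, \qquad \mathrm{AP50} = \mathrm{AP}_{t=0.50}$$

where AP_t is the Average Precision computed at IoU threshold t.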

Additional information

Additional information about the details of the modifications done using Model Surgery is here: https://github.com/TexasInstruments/edgeai-yolov5


mathmanu commented Jul 17, 2024

How Model Surgery is actually done

This is for information only - the above patch already includes these changes.

The patch adds the following code to tools/train.py in the mmyolo repository; tools/test.py is modified similarly to include model surgery.

from edgeai_torchmodelopt import xmodelopt
# additional imports used by the snippet below (import paths assumed; adjust to your mmengine/mmyolo version)
from mmengine.model import is_model_wrapper
from mmyolo.models.dense_heads import (YOLOv5HeadModule, YOLOv6HeadModule,
                                        YOLOv7HeadModule, YOLOv8HeadModule)

Add this in the parse_args function:

    parser.add_argument('--model-surgery', type=int, default=0)

Add the following code in tools/train.py, just before the runner.train() line:

if args.model_surgery:
    # Select the surgery API version:
    # 1 = module-based surgery (surgery.v1), 2 = torch.fx based surgery (surgery.v2)
    surgery_fn = xmodelopt.surgery.v1.convert_to_lite_model if args.model_surgery == 1 \
                 else (xmodelopt.surgery.v2.convert_to_lite_fx if args.model_surgery == 2 else None)

    # Initialize weights and unwrap the model (e.g. from a DDP wrapper) before modifying it
    runner._init_model_weights()
    if is_model_wrapper(runner.model):
        runner.model = runner.model.module

    # Convert the backbone and neck to their lite variants
    runner.model.backbone = surgery_fn(runner.model.backbone)
    runner.model.neck = surgery_fn(runner.model.neck)

    # Only head_module of the head goes through model surgery, as it contains all the compute layers
    if not isinstance(runner.model.bbox_head.head_module, (YOLOv5HeadModule, YOLOv7HeadModule, YOLOv8HeadModule, YOLOv6HeadModule)):
        # Preserve reg_max, in case surgery does not carry this attribute over
        if hasattr(runner.model.bbox_head.head_module, 'reg_max'):
            reg_max = runner.model.bbox_head.head_module.reg_max
        else:
            reg_max = None
        runner.model.bbox_head.head_module = \
            surgery_fn(runner.model.bbox_head.head_module)
        if reg_max is not None:
            runner.model.bbox_head.head_module.reg_max = reg_max
    elif isinstance(runner.model.bbox_head.head_module, (YOLOv8HeadModule, YOLOv6HeadModule)):
        # YOLOv6/YOLOv8 head modules always go through the module-based surgery (v1)
        runner.model.bbox_head.head_module = xmodelopt.surgery.v1.convert_to_lite_model(runner.model.bbox_head.head_module)

    # Re-wrap the model (e.g. for distributed training) after surgery
    runner.model = runner.wrap_model(runner.cfg.get('model_wrapper_cfg'), runner.model)

print("\n\nmodel summary:\n", runner.model)

@tsolimanrb

Currently I'm facing a major issue integrating these optimizations.

from torch.fx.graph import _parse_stack_trace

ImportError: cannot import name '_parse_stack_trace' from 'torch.fx.graph'

When I try to run the model after integrating the optimization, I run into this issue. When I tracked it down, it requires torch version 2.2 at least. However, mmyolo 0.6, which is the latest version, requires an earlier torch version, and the same goes for CUDA.

I hope you can share the environment you are currently using, because I tried several updates but reached a dead end.


rekib23r commented Dec 10, 2024

The mmyolo repository requires a torch version less than 2.1, because it depends on a particular version of mmcv.

You can try following these steps after applying the above patch:

  1. cd mmyolo
  2. pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
  3. ./setup.sh
  4. mim install -v -e .
  5. pip install albumentations==1.3.1
  6. pip install numpy==1.26.4

*Python version is 3.10.
