
Initial mmsegmentation support #934

Merged: 136 commits, Mar 30, 2022

Conversation

@nicjac (Contributor) commented Sep 25, 2021:

Initial mmsegmentation support. This is a work-in-progress.

Will populate this PR as we go!

  • Integration can be used end-to-end with DeepLabV3 and SegFormer (a rough usage sketch follows these lists)
  • W&B integration for semantic segmentation
  • New checkpoint export/import support (this one might have to wait as feature was added after the fork)
  • Custom CFG like mmdet implementation (might already be done, but should be tested)
  • Properly implement the "hack" in utils.py that avoids PyTorch doing parallel training by default
  • Interpretation loop (currently using mmdet's)
  • Updated semantic segmentation notebook to showcase mmsegmentation models

To figure out:

  • Splitter / Trainable Params / param_groups
  • Could we use fastai's Dice and other metrics (if fastai is a hard dependency)? Will keep the current implementation for now.

To add in an upcoming PR:

  • Lightning integration
  • CPU support
  • Tests for mmsegmentation models
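As flagged in the first checklist item, the integration is meant to be usable end-to-end. Below is a rough, hypothetical sketch of what that flow could look like, patterned on icevision's existing mmdet API; the backbone handle, module paths, and record parsing are assumptions, not the final interface.

# Hypothetical end-to-end sketch patterned on the existing mmdet API.
# Backbone name and record parsing below are assumptions, not the final interface.
from icevision.all import *

# DeepLabV3 model from the new mmseg integration
model_type = models.mmseg.deeplabv3
backbone = model_type.backbones.resnet50_d8(pretrained=True)  # assumed backbone handle
model = model_type.model(backbone=backbone, num_classes=32)

# train_records / valid_records would come from a semantic-segmentation parser
# (e.g. CamVid-style masks); parsing is omitted here.
train_tfms = tfms.A.Adapter([*tfms.A.aug_tfms(size=512), tfms.A.Normalize()])
valid_tfms = tfms.A.Adapter([*tfms.A.resize_and_pad(512), tfms.A.Normalize()])
train_ds, valid_ds = Dataset(train_records, train_tfms), Dataset(valid_records, valid_tfms)

train_dl = model_type.train_dl(train_ds, batch_size=4, shuffle=True)
valid_dl = model_type.valid_dl(valid_ds, batch_size=4, shuffle=False)

# fastai training loop, mirroring the other icevision model types
learn = model_type.fastai.learner(dls=[train_dl, valid_dl], model=model)
learn.fine_tune(10, 1e-4)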

nicjac added 19 commits July 14, 2021 20:39
Fixed PIL size bug in ImageRecordComponent (airctic#889)
This is currently very crude, mostly copy/pasting of the mmdet integration with a few changes.
- Fixed issue with number of classes
- Fixed dataloader issue (mask tensor size)
- This implementation was created based on the existing mmdet integration, and some remaining, unused folders, were removed
@lgvaz (Collaborator) commented Sep 27, 2021:

Heeey nicjac! This is a big and a very very helpful PR, we'll try to assist you as much as we can, feel free to ask for help at any point =)

from icevision.models.mmseg.models.deeplabv3.backbones import *
from icevision.models.mmseg.models import *
from icevision.models.interpretation import get_samples_losses

Contributor:
It'd be good to have more tests similar to the ones we currently support for mmdet here.
Super nice to have tests for metrics, show_results and plot_top_losses but dataloaders, and model training should be checked too.
We are adding support for a completely new library and we need to make sure we don't break it in the future when we change something.

@ai-fast-track thoughts?

Contributor Author:
I agree! I kind of ran out of time to implement more tests. Happy to do some (either for this PR, or a future refinement pass).

@FraPochetti (Contributor) commented Feb 20, 2022:
It'd make me more comfortable with a couple more added to this PR.
Doesn't need to be anything great.
You can totally copy the ones we already have for mmdet.

Collaborator:
@FraPochetti @nicjac I agree we need to have at least some basic tests to make sure nothing breaks. We can add more tests in subsequent PRs.
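To make the ask concrete, a minimal smoke test in the style of the existing mmdet tests might look like the sketch below; the backbone and model-building handles are assumptions for illustration, and the real suite would also cover dataloaders and a short training run on a small sample dataset.

# Hypothetical smoke-test sketch; backbone/model handles are assumptions
# patterned on the mmdet tests, not the tests added in this PR.
import pytest
from icevision.all import *


@pytest.mark.parametrize("num_classes", [2, 32])
def test_mmseg_deeplabv3_build(num_classes):
    model_type = models.mmseg.deeplabv3
    backbone = model_type.backbones.resnet50_d8(pretrained=False)

    # Building the model should work for both binary and multi-class setups
    model = model_type.model(backbone=backbone, num_classes=num_classes)
    assert model is not None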

_seg_masks_gt = None
_seg_masks_pred = None

def __init__(self, ignore_class=None):
Contributor:
Why don't we set this to Void as default?

Contributor Author:
This is an arbitrary choice on my part - I thought the expected default behaviour would be not to ignore any class, though I can see why Void would make sense here. Happy either way.

Contributor:
I'm not familiar with Void and have never used it. In contexts like this I also usually set the variable to None.

Contributor:
@lgvaz what are your 2 cents here?

Contributor:
My intuition was right, take a look at this example:
[screenshot omitted]

@FraPochetti (Contributor) commented Feb 23, 2022:
Wait, I am referring to ignore_class which is set as an instance attribute, not a class one.

Contributor:
Oh, sorry I totally misunderstood the issue then.

@FraPochetti (Contributor) commented Feb 23, 2022:
Np man.
When I asked

Why don't we set this to Void as default?

I meant

Why don't we set ignore_class to Void as default?

@FraPochetti (Contributor) commented Feb 23, 2022:
Thanks for the discussion guys.
Thinking about it more thoroughly, I am starting to convince myself that keeping ignore_class=None, as it is now, could be a good idea. I am not sure, but I think Void is CamVid-specific, so it makes no sense to set it as the default.
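For readers following along, here is a small, self-contained illustration of what ignore_class changes in a multi-class Dice computation, including the zero-denominator case mentioned in the commit history; it is a sketch of the idea, not the metric implementation added in this PR.

# Illustrative multi-class Dice with an optional ignored class;
# not the implementation added in this PR.
from typing import Optional

import torch


def multiclass_dice(
    pred: torch.Tensor, target: torch.Tensor, num_classes: int, ignore_class: Optional[int] = None
) -> float:
    """pred and target are (H, W) tensors of integer class indices."""
    dices = []
    for c in range(num_classes):
        if c == ignore_class:  # e.g. a CamVid-style "Void" class
            continue
        p, t = pred == c, target == c
        denom = p.sum().item() + t.sum().item()
        if denom == 0:  # class absent from both tensors: skip it, don't divide by zero
            continue
        dices.append(2 * (p & t).sum().item() / denom)
    return sum(dices) / len(dices) if dices else 0.0


# ignore_class=None (the current default): every class contributes to the mean.
# ignore_class=0: a "void"/background class is dropped from the average.
pred = torch.randint(0, 3, (8, 8))
print(multiclass_dice(pred, pred, num_classes=3))                  # 1.0
print(multiclass_dice(pred, pred, num_classes=3, ignore_class=0))  # still 1.0, class 0 excluded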

@FraPochetti (Contributor) commented:
Out of curiosity, how do you handle the end2end inference flow with masks (e.g. return the mask on the original image, not the transformed one)?
The current implementation of end2end_detect (here) supports only bboxes.

-    def __init__(self, data: np.uint8):
-        if len(data.shape) == 2:
+    def __init__(self, data: np.uint8, pad_dim: bool = True):
+        if pad_dim and (len(data.shape) == 2):
Contributor:
What is pad_dim for?

Contributor Author:
This change introduced the option not to pad the mask dimensions. Before this change, the mask array always ended up being padded.

Contributor:
During the transforms?
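To spell out what the new flag does: a 2-D (H, W) mask is expanded to (1, H, W) only when pad_dim is left at its default, so callers that want the raw 2-D array can now opt out. A simplified, illustrative sketch (not the actual MaskArray code):

# Simplified illustration of the pad_dim option; not the actual MaskArray code.
import numpy as np


class MaskArraySketch:
    def __init__(self, data: np.ndarray, pad_dim: bool = True):
        # Optionally add a leading mask dimension so downstream code can
        # always assume a (num_masks, H, W) layout.
        if pad_dim and len(data.shape) == 2:
            data = data[None, ...]
        self.data = data


mask = np.zeros((4, 4), dtype=np.uint8)
assert MaskArraySketch(mask).data.shape == (1, 4, 4)              # padded (default)
assert MaskArraySketch(mask, pad_dim=False).data.shape == (4, 4)  # left unpadded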

@@ -39,10 +39,21 @@ def convert_raw_predictions(
keep_images: bool = False,
) -> List[Prediction]:
mask_preds = raw_preds.argmax(dim=1)
tensor_images, *_ = batch
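For context on the diff above, the argmax over the class dimension is what turns per-pixel logits into a mask prediction for each image in the batch. A tiny standalone illustration:

# Standalone illustration of the raw_preds.argmax(dim=1) step shown above.
import torch

batch_size, num_classes, h, w = 2, 3, 4, 4
raw_preds = torch.randn(batch_size, num_classes, h, w)  # (N, C, H, W) per-pixel logits

mask_preds = raw_preds.argmax(dim=1)  # (N, H, W), one class index per pixel
assert mask_preds.shape == (batch_size, h, w)
assert int(mask_preds.max()) < num_classes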
@FraPochetti (Contributor) commented Feb 20, 2022:
Same comment as the previous file icevision/models/fastai/unet/fastai/callbacks.py.
UNet is outside of mmseg.
Did anything break because of the changes to MaskArray, hence you were obliged to edit those files?

icevision_install.sh (review thread outdated, resolved)
@nicjac (Contributor Author) commented Feb 20, 2022:

Out of curiosity, how do you handle the end2end inference flow with masks (e.g. return the mask on the original image, not the transformed one)? The current implementation of end2end_detect (here) supports only bboxes.

Unfortunately, it does not just yet. I have a very hacky implementation for prototyping purposes, but a proper implementation will be part of a refinement PR. It shouldn't take too long as we do need this for our work as well!

@FraPochetti (Contributor) commented:
Out of curiosity, how do you handle the end2end inference flow with masks (e.g. return the mask on the original image, not the transformed one)? The current implementation of end2end_detect (here) supports only bboxes.

Unfortunately, it does not just yet. I have a very hacky implementation for prototyping purposes, but a proper implementation will be part of a refinement PR. It shouldn't take too long as we do need this for our work as well!

I am very interested in this one, so I can definitely take care of it too.
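For the record, the missing piece is mapping a predicted mask from the transformed image size back to the original image size, using nearest-neighbour resampling so class indices are never blended. A rough, illustrative sketch of that step (not the implementation that will land in the follow-up PR):

# Illustrative sketch of resizing a predicted mask back to the original image size;
# not the end2end_detect extension itself.
import numpy as np
from PIL import Image


def mask_to_original_size(pred_mask: np.ndarray, original_size: tuple) -> np.ndarray:
    """Resize an (H, W) class-index mask to (orig_h, orig_w) with nearest-neighbour resampling."""
    orig_h, orig_w = original_size
    resized = Image.fromarray(pred_mask.astype(np.uint8)).resize(
        (orig_w, orig_h), resample=Image.NEAREST
    )
    return np.array(resized)


pred = np.random.randint(0, 3, size=(384, 384)).astype(np.uint8)  # mask at transformed size
restored = mask_to_original_size(pred, original_size=(720, 960))
assert restored.shape == (720, 960)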

@nicjac (Contributor Author) commented Mar 28, 2022:

I believe everything is sorted 🤞

@FraPochetti FraPochetti merged commit 0d4d689 into airctic:master Mar 30, 2022
brownaa added a commit to brownaa/icevision that referenced this pull request May 22, 2022
* Add error message for easy data fix (airctic#1074)

* add error message for easy data fix

* formatting

* fix black

* Mmseg initial support (airctic#1079)

* Preliminary work on mmsegmentation integration

This is currently very crude, mostly copy/pasting of the mmdet integration with a few changes.

* Updated mmseg dataloaders based on unet implementation

* Training now kind of working!

- Fixed issue with number of classes
- Fixed dataloader issue (mask tensor size)

* Improved predictions handling

* Fixed prediction, general code clean-up

* Simplification of folder structure

* Getting closer to final structure

* Finished first structure update

- This implementation was created based on the existing mmdet integration, and some remaining, unused folders, were removed

* Attempt to fix show_batch for mmseg

* Added binary and multi-class dice coefficient metrics

* Removed test-only code in binary Dice coefficient method

* Initial support for multiple pre-trained variants

* Reformatting using black

* First version of DeepLabV3plus support

* Exceptions to elegantly handle unknown pre-trained variants

* Added all DeepLabV3 pretrained variants

* Ground truth masks saved with predictions if keep_images set to True

* Added all DeepLabV3Plus pre-training variants

* Added proper support for param_groups

* Removed erroneous pre-trained variants for DeepLabV3Plus

* Fixed erroneous DeepLabV3 pre-trained variants

* re-added default values to DeepLabV3 pretrained variants

* Improved model loading and initialization

* Improved how distributed BNs are handled

* - Proper integration of loop_mmseg
- First test!

* Updated tests

* __init__.py file

* jaccard index metric

* update init file to include multilabel dice coef

* updates to logical statements

* update to handle denominator equal to 0

* update to handle denominator equal to 0

* removed repeated code

* removed repeated code

* Getting started with seg notebook

* Formatting fix (black)

* Removed temporary file

* Added mmsegmentation to Dockerfile

* Added mmseg installation to workflows

* Added artifacts to gitignore (wandb)

* Testing mim-based install in actions

* Fixing mim-based install

* Pinning mmcv version in mim installs

* Bumping mmcv-full to 1.3.13

* Improved CPU support

* Reverted workflow files to match current master

* Improved tests on CPU devices

* Better handling of the device mess?

* Attempt to remove hacky device code

* Added mmseg to soft dependency test

* changed to binary jaccard index

* delete jaccard_index

* added jaccard index

* up to date metric test files

* Fixed init for binary jaccard index

* Argument to exclude classes from multiclass dice coefficient metric

* Added background exclusion case for multilabel dice coefficient tests

* adjusted setup, need to verify values

* added comment for correct values

* updated cases and setup

* updated cases and setup

* added class map to dictionary for each case

* Adapted RLE mask saving code, might need to revisit later

* Added support for wandb logging of semantic seg masks

* Added option not to pad dimensions when creating a MaskArray object

* Fixed typo

* Resampling of masks should use nearest neighbour

* Added TODO to masks.py

* Fixed loss keys for mmseg

* mmseg installation in icevision_install.sh

* Fixed typo in install script

* Fixed albumentation for polygon mask

* Black formatting

* Fix handling of special case of Polygon masks

* Adding after_pred convert raw pred callback to unet fastai learner

* Fixed unet prediction when GT not available

* More fixes to unet prediction callback

* Fixing predict batch for mmseg models

* Fixed predictions conversion for mmseg

* Improved unet prediction conversion

* segformer support

* Updated mmseg model creation function

* Misc comment

* Added options to install script to assist with testing (temporary)

* Black formatting

* Remove install colab script added by mistake

* Actual updated install script

* Started work on README

* Updated README somewhat

* Fix draw data for mask files

* Implementation of mask resizing for MaskFile objects

* Support for MaskFile objects in draw_data

* Fixed mask display in show_batch for mask-enabled mmdet models

* import typo

* Adding warning if re-creating masks from file in draw_data

* Added warning if maskfile loaded from disc for display

* README typo fixed

* Removed temporary mmsegmentation installation option

* Reverted changes to plot_top_losses

* solving merging conflicts

* fix error in CI

* adding back segm nbs

* merging notebooks

* fix indentation in build-pkg

* adding fastai and pl train tests

Co-authored-by: Nicolas Jaccard <[email protected]>
Co-authored-by: Gabriella Lanouette <[email protected]>
Co-authored-by: Nicolas Jaccard <[email protected]>

* Revert "Mmseg initial support (airctic#1079)" (airctic#1082)

This reverts commit 20c3ff5.

* Initial mmsegmentation support (airctic#934)

* Fixed PIL size bug in ImageRecordComponent (airctic#889)

* Preliminary work on mmsegmentation integration

This is currently very crude, mostly copy/pasting of the mmdet integration with a few changes.

* Updated mmseg dataloaders based on unet implementation

* Training now kind of working!

- Fixed issue with number of classes
- Fixed dataloader issue (mask tensor size)

* Improved predictions handling

* Fixed prediction, general code clean-up

* Simplification of folder structure

* Getting closer to final structure

* Finished first structure update

- This implementation was created based on the existing mmdet integration, and some remaining, unused folders, were removed

* Attempt to fix show_batch for mmseg

* Added binary and multi-class dice coefficient metrics

* Removed test-only code in binary Dice coefficient method

* Initial support for multiple pre-trained variants

* Reformatting using black

* First version of DeepLabV3plus support

* Exceptions to elegantly handle unknown pre-trained variants

* Added all DeepLabV3 pretrained variants

* Ground truth masks saved with predictions if keep_images set to True

* Added all DeepLabV3Plus pre-training variants

* Added proper support for param_groups

* Removed erroneous pre-trained variants for DeepLabV3Plus

* Fixed erroneous DeepLabV3 pre-trained variants

* re-added default values to DeepLabV3 pretrained variants

* Improved model loading and initialization

* Improved how distributed BNs are handled

* - Proper integration of loop_mmseg
- First test!

* Updated tests

* __init__.py file

* jaccard index metric

* update init file to include multilabel dice coef

* updates to logical statements

* update to handle denominator equal to 0

* update to handle denominator equal to 0

* removed repeated code

* removed repeated code

* Getting started with seg notebook

* Formatting fix (black)

* Removed temporary file

* Added mmsegmentation to Dockerfile

* Added mmseg installation to workflows

* Updating mmcv to 1.3.7 for mmseg support

* Added artifacts to gitignore (wandb)

* Testing mim-based install in actions

* Fixing mim-based install

* Pinning mmcv version in mim installs

* Bumping mmcv-full to 1.3.13

* Improved CPU support

* Reverted workflow files to match current master

* Improved tests on CPU devices

* Update docs action to match new dependencies versions

* Better handling of the device mess?

* Attempt to remove hacky device code

* Added mmseg to soft dependency test

* changed to binary jaccard index

* delete jaccard_index

* added jaccard index

* up to date metric test files

* Fixed init for binary jaccard index

* Argument to exclude classes from multiclass dice coefficient metric

* Added background exclusion case for multilabel dice coefficient tests

* adjusted setup, need to verify values

* added comment for correct values

* updated cases and setup

* updated cases and setup

* added class map to dictionary for each case

* Adapted RLE mask saving code, might need to revisit later

* Added support for wandb logging of semantic seg masks

* Added option not to pad dimensions when creating a MaskArray object

* Fixed typo

* Resampling of masks should use nearest neighbour

* Added TODO to masks.py

* Fixed loss keys for mmseg

* mmseg installation in icevision_install.sh

* Fixed typo in install script

* Fixed albumentation for polygon mask

* Black formatting

* Fix handling of special case of Polygon masks

* Started updating getting started notebook

* More updates to the notebook

* Adding after_pred convert raw pred callback to unet fastai learner

* Fixed unet prediction when GT not available

* More fixes to unet prediction callback

* Fixing predict batch for mmseg models

* Fixed predictions conversion for mmseg

* Improved unet prediction conversion

* Black formatting

* segformer support

* Updated mmseg model creation function

* Misc comment

* Added options to install script to assist with testing (temporary)

* Black formatting

* Remove install colab script added by mistake

* Actual updated install script

* Updated semantic seg notebook

* Removed legacy cell from notebook

* Further updated notebook

* Updated "open in colab" icon URL

* Added cell to restart kernel after installation

* Reverted notebook changes for merge into master

* Started work on README

* Updated README somewhat

* Updated semantic seg notebook to match updated version on master

* Fix draw data for mask files

* Implementation of mask resizing for MaskFile objects

* Support for MaskFile objects in draw_data

* Fixed mask display in show_batch for mask-enabled mmdet models

* import typo

* Adding warning if re-creating masks from file in draw_data

* Added warning if maskfile loaded from disc for display

* README typo fixed

* Removed temporary mmsegmentation installation option

* Reverted changes to plot_top_losses

* Disabling mmseg PL implementation for now (pending update)

* Re-added unet callback

* Fixed CI

* Actually disabled PL implementation

* Removing test file uploaded by mistake

* mmseg fastai training test

* Fixed BackboneConfig location

* Fixed formatting

* Use latest convert_raw_predictions for Unet

* Fixed Unet PL test with new raw prediction conversion method

* Removed lightning mmseg files pending proper implementation

Co-authored-by: Gabriella Lanouette <[email protected]>

* don't open image on get_img_size (airctic#1084)

* fixes airctic#1090; Fix typo lightning -> lighting in Albumentations helper (airctic#1091)

* Fix coco mask vstack error (airctic#1095)

* Fix coco mask vstack error

* Give empty mask for coco mask prediction

* Reformat code

Co-authored-by: Joowon <[email protected]>
Co-authored-by: Francesco Pochetti <[email protected]>
Co-authored-by: Nicolas Jaccard <[email protected]>
Co-authored-by: Gabriella Lanouette <[email protected]>
Co-authored-by: Nicolas Jaccard <[email protected]>
Co-authored-by: Lucas Vazquez <[email protected]>
Co-authored-by: Behnam Haddadian <[email protected]>
Co-authored-by: aisensiy <[email protected]>