This repository has been archived by the owner on Oct 9, 2023. It is now read-only.

Enable input normalization in SemanticSegmentationData module #1398

Closed
Nico995 opened this issue Jul 19, 2022 · 2 comments · Fixed by #1399
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@Nico995
Contributor

Nico995 commented Jul 19, 2022

🚀 Feature

Add the possibility to normalize Input images in SemanticSegmentationData module

Motivation

Enable effortless normalization, as already implemented by ImageClassificationData, where it can optionally be configured with:

dm = SemanticSegmentationData.from_folders(
    # ...
    args_transforms=dict(mean=mean, std=std)
)
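For context, the normalization requested here is the standard per-channel transform (x - mean) / std. A minimal pure-Python sketch (independent of Flash/Kornia; the ImageNet statistics are the same illustrative defaults used in the proposed code below):

```python
# Per-channel normalization: out[c] = (in[c] - mean[c]) / std[c].
# Pure-Python sketch of what K.augmentation.Normalize computes on tensors.

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize(pixel, mean=IMAGENET_MEAN, std=IMAGENET_STD):
    """Normalize one RGB pixel (values in [0, 1]) channel by channel."""
    return tuple((x - m) / s for x, m, s in zip(pixel, mean, std))

# A pixel equal to the channel means maps exactly to zero:
print(normalize((0.485, 0.456, 0.406)))  # -> (0.0, 0.0, 0.0)
```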

Pitch

Change /flash/image/segmentation/input_transform.py:43 from this:

@dataclass
class SemanticSegmentationInputTransform(InputTransform):

    image_size: Tuple[int, int] = (128, 128)

    def train_per_sample_transform(self) -> Callable:
        return ApplyToKeys(
            [DataKeys.INPUT, DataKeys.TARGET],
            KorniaParallelTransforms(
                K.geometry.Resize(self.image_size, interpolation="nearest"), K.augmentation.RandomHorizontalFlip(p=0.5)
            ),
        )

    def per_sample_transform(self) -> Callable:
        return ApplyToKeys(
            [DataKeys.INPUT, DataKeys.TARGET],
            KorniaParallelTransforms(K.geometry.Resize(self.image_size, interpolation="nearest")),
        )

    def predict_per_sample_transform(self) -> Callable:
        return ApplyToKeys(DataKeys.INPUT, K.geometry.Resize(self.image_size, interpolation="nearest"))

into this

@dataclass
class SemanticSegmentationInputTransform(InputTransform):

    image_size: Tuple[int, int] = (128, 128)
    mean: Union[float, Tuple[float, float, float]] = (0.485, 0.456, 0.406)
    std: Union[float, Tuple[float, float, float]] = (0.229, 0.224, 0.225)

    def train_per_sample_transform(self) -> Callable:
        return T.Compose(
            [
                ApplyToKeys(
                    [DataKeys.INPUT, DataKeys.TARGET],
                    KorniaParallelTransforms(
                        K.geometry.Resize(self.image_size, interpolation="nearest"),
                    )
                ),
                ApplyToKeys(
                    [DataKeys.INPUT],
                    K.augmentation.Normalize(mean=self.mean, std=self.std),
                ),
            ]
        )

    def per_sample_transform(self) -> Callable:
        return T.Compose(
            [
                ApplyToKeys(
                    [DataKeys.INPUT, DataKeys.TARGET],
                    KorniaParallelTransforms(
                        K.geometry.Resize(self.image_size, interpolation="nearest"),
                    )
                ),
                ApplyToKeys(
                    [DataKeys.INPUT],
                    K.augmentation.Normalize(mean=self.mean, std=self.std),
                ),
            ]
        )

    def predict_per_sample_transform(self) -> Callable:
        return ApplyToKeys(
            DataKeys.INPUT,
            K.geometry.Resize(self.image_size, interpolation="nearest"),
            K.augmentation.Normalize(mean=self.mean, std=self.std),
        )

Alternatives

The alternative is to write a custom InputTransform subclass every time normalization is needed.

@Nico995 added the enhancement and help wanted labels on Jul 19, 2022
@ethanwharris
Collaborator

Hey @Nico995, thanks for the suggestion! Your changes look great; would you be willing to open a PR for this?

@Nico995
Contributor Author

Nico995 commented Jul 19, 2022

#1399
Done :)
