
Refactoring of ImageProcessorFast #35069

Merged

Conversation

@yonigozlan (Member) commented Dec 4, 2024

What does this PR do?

This PR introduces a significant refactoring of how fast image processors can be implemented in Transformers.

Motivation

The primary goal of this refactoring is to simplify the process of creating fast image processors when a slow image processor already exists.

Unlike the current BaseImageProcessor, which provides only a minimal skeleton for image processor classes, the newly introduced BaseImageProcessorFast includes all the basic functionalities needed by a typical image processor. This design allows contributors to focus primarily on modifying default parameters, such as the image mean, std, or default resizing dimensions, rather than rewriting foundational code.

Key Advantages:

  • Ease of Contribution: Contributors no longer need to rely on copy-pasting from an arbitrary slow image processor (which I feel is what currently happens for some slow image processors). Instead, BaseImageProcessorFast provides a natural starting point with predefined functionalities.
  • Consistency: Contributors are encouraged to use a common structure. Whether they only modify default parameters, leverage mixins, or add custom code, they are likely to follow a consistent syntax and logic.
  • Automatic Optimizations: Improvements made to BaseImageProcessorFast are automatically propagated to all derived fast image processors.
  • Reduced Diffs: The new approach minimizes added diffs compared to the existing "# Copied from" philosophy in slow image processors. While the "repeat yourself" philosophy is an important part of modeling in Transformers, I feel that it might not be as necessary for image processing, as the model's uniqueness is rarely found in the image processing logic.

Implementation

Functional or class transforms

Following the torchvision approach to defining image transforms, there are two main ways to write the processing logic for image processors: using functional transforms or class-based transforms.

This PR showcases both approaches; a short illustrative sketch contrasting the two styles follows the comparison lists below.

The choice is entirely open for debate at this point. I see advantages to both approaches, but I’m sure I haven’t considered everything, so please share your thoughts if you have a preference one way or the other.

To me, the advantages/drawbacks of functionals are the following:

  • 🟢 Less abstraction so potentially easier to read
  • 🟢 Easier for contributors to write or adapt from existing slow image processors (which currently use functionals).
  • 🟢 Allows more flexibility in processing logic, as transforms do not need to be sequential. For more complex processors, using piped class-based transforms would likely require mixing functionals and class-based transforms, or adding logic outside the transforms pipeline instead of a simple one-liner.
  • 🔴 The logic can be more verbose than for class transforms

For class transforms:

  • 🟢 Aligns with practices in other libraries like Albumentations.
  • 🟢 Generally cleaner and more structured, easier to add/remove simple transforms.
  • 🔴 As mentioned before, the logic is restricted by the sequential nature of the pipeline. For complex processors (e.g., involving patching), mixing functionals and class-based transforms or adding logic around the pipeline seems unavoidable (as seen in LlavaOnevisionImageProcessorFast).
  • 🔴 There appear to be compilation issues: LlavaOnevisionImageProcessorFast fails to compile, as it seemingly gets stuck in an infinite compilation loop, while its functional equivalent, LlavaNextImageProcessorFast, compiles without problems. However, this needs more thorough investigation.
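For illustration, here is a minimal sketch of the same resize + rescale + normalize logic written both ways with torchvision (this is not code from the PR; the tensor shape and parameter values are placeholders):

import torch
from torchvision.transforms import v2
from torchvision.transforms.v2 import functional as F

image = torch.randint(0, 256, (3, 480, 640), dtype=torch.uint8)

# Functional style: each step is an explicit call, so arbitrary logic can be
# interleaved between steps (e.g. computing a pad size from the resized shape).
resized = F.resize(image, [224, 224])
rescaled = resized.to(torch.float32) / 255.0
normalized = F.normalize(rescaled, mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])

# Class-based style: the steps are declared once and applied sequentially.
pipeline = v2.Compose(
    [
        v2.Resize([224, 224]),
        v2.ToDtype(torch.float32, scale=True),  # rescale to [0, 1]
        v2.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
    ]
)
normalized_alt = pipeline(image)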

New add-fast-image-processor Command in transformers-cli

This PR introduces a new CLI command that automates the creation of fast image processors. Given a model folder name, it generates all necessary imports, documentation, dummy objects, and a fast image processor file where default parameters are parsed from the slow image processor file.
The new fast image processor class is also added to the image processor test file by the CLI. However, some tests may still need manual adjustments to properly include and validate the new fast image processor class.

Example Usage:

transformers-cli add-fast-image-processor --model-name blip

Example Output:

In `src/transformers/models/blip/image_processing_blip_fast.py`:

# coding=utf-8
# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Fast Image processor class for BLIP."""

from ...image_processing_utils_fast import BaseImageProcessorFast
from ...image_utils import OPENAI_CLIP_MEAN, OPENAI_CLIP_STD, PILImageResampling


class BlipImageProcessorFast(BaseImageProcessorFast):
    r"""
    Constructs a fast BLIP image processor.

    Args:
        do_resize (`bool`, *optional*):
            Whether to resize the image's (height, width) dimensions to the specified `size`. Can be overridden by the
            `do_resize` parameter in the `preprocess` method.
        size (`dict`, *optional*):
            Size of the output image after resizing. Can be overridden by the `size` parameter in the `preprocess`
            method.
        resample (`PILImageResampling`, *optional*):
            Resampling filter to use if resizing the image. Only has an effect if `do_resize` is set to `True`. Can be
            overridden by the `resample` parameter in the `preprocess` method.
        do_center_crop (`bool`, *optional*, defaults to `True`):
            Whether to center crop the image to the specified `crop_size`. Can be overridden by `do_center_crop` in the
            `preprocess` method.
        crop_size (`Dict[str, int]`, *optional*, defaults to 224):
            Size of the output image after applying `center_crop`. Can be overridden by `crop_size` in the `preprocess`
            method.
        do_rescale (`bool`, *optional*):
            Whether to rescale the image by the specified scale `rescale_factor`. Can be overridden by the
            `do_rescale` parameter in the `preprocess` method.
        rescale_factor (`int` or `float`, *optional*, defaults to `1/255`):
            Scale factor to use if rescaling the image. Only has an effect if `do_rescale` is set to `True`. Can be
            overridden by the `rescale_factor` parameter in the `preprocess` method.
        do_normalize (`bool`, *optional*):
            Whether to normalize the image. Can be overridden by the `do_normalize` parameter in the `preprocess`
            method.
        image_mean (`float` or `List[float]`, *optional*):
            Mean to use if normalizing the image. This is a float or list of floats the length of the number of
            channels in the image. Can be overridden by the `image_mean` parameter in the `preprocess` method.
        image_std (`float` or `List[float]`, *optional*):
            Standard deviation to use if normalizing the image. This is a float or list of floats the length of the
            number of channels in the image. Can be overridden by the `image_std` parameter in the `preprocess` method.
        do_convert_rgb (`bool`, *optional*):
            Whether to convert the image to RGB.
    """


    # This generated class can be used as a starting point for the fast image processor.
    # if the image processor is only used for simple augmentations, such as resizing, center cropping, rescaling, or normalizing,
    # only the default values should be set in the class.
    # If the image processor requires more complex augmentations, methods from BaseImageProcessorFast can be overridden.
    # For an example of a fast image processor requiring more complex augmentations, see `LlavaOnevisionImageProcessorFast`.

    # Default values should be checked against the slow image processor
    # None values left after checking can be removed
    resample = PILImageResampling.BICUBIC
    image_mean = OPENAI_CLIP_MEAN
    image_std = OPENAI_CLIP_STD
    size = {"height": 384, "width": 384}
    default_to_square = None
    crop_size = None
    do_resize = True
    do_center_crop = None
    do_rescale = True
    do_normalize = True
    do_convert_rgb = True

In this case, this is enough to get a fully working fast Blip image processor!
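As a quick sanity check (not part of the PR diff; the checkpoint name below is just an example), the generated class can then be used like any other image processor:

import numpy as np
from PIL import Image
from transformers import AutoImageProcessor

# use_fast=True selects the fast, torch/torchvision-based image processor class
processor = AutoImageProcessor.from_pretrained(
    "Salesforce/blip-image-captioning-base", use_fast=True
)

image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))
pixel_values = processor(images=image, return_tensors="pt")["pixel_values"]
print(pixel_values.shape)  # expected to be (1, 3, 384, 384) given the defaults above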

New Mixins for Common Logic

To handle shared preprocessing and post-processing logic, this PR introduces reusable mixins (only LlavaPatchingMixin is present as an example in this PR). Additional mixins are planned for other common patterns, such as:

  • Video processing
  • DETR-like processing
  • Segmentation post-processing
  • Depth estimation post-processing

Edit: Removing the Mixins for patching in favor of # Copied from or modular in the future, as such preprocessing techniques don't usually stick around for long, and adding a Mixin every time a technique is used twice or more wouldn't scale well.

Summary: Three Types of Fast Image Processors

  1. Basic Processors:
    • These support standard operations like resizing, rescaling, normalizing, and cropping.
    • They require minimal customization, mostly overriding default parameters.
    • Examples in this PR: blip, clip, deit, siglip, and vit.

  2. Mixin-Based Processors:
    • These rely heavily on predefined mixins to implement shared logic.
    • Examples in this PR: llava_next and llava_onevision.

    Edit: Removing Mixins in favor of # Copied From or modular for now, see earlier comment.

  3. Exotic Processors:
    • These have unique processing logic that differs significantly from the base class or existing mixins.
    • Contributors need to override functions like __init__ and preprocess while reusing the syntax and structure of BaseImageProcessorFast.
    • Example in this PR: convnext

Miscellaneous Issues and Questions

  • The CLI currently needs an existing slow image processor to work. If the library aims to eventually fully deprecate slow image processors, or at least stop adding them for new models, this will need to change, and the CLI should maybe be integrated into add-new-model-like.
  • There is a significant design difference between slow and fast image processors. For slow processors, contributors need to rewrite most of the logic, while for fast processors, the goal is to rewrite as little code as possible. This might get confusing for contributors, especially since, as mentioned in the previous point, users will most likely start by writing a slow image processor.
  • Adding lots of mixins to image_processing_utils_fast might make the file large and difficult to read. This also raises the question of when a mixin should be created for repeated patterns across image processors. Should it be created when a pattern is shared by two or more models? Or when it presents an idea likely to be reused in the future?
  • Padding functionality is currently not included in BaseImageProcessorFast because the image processors that use padding implement it in slightly different ways. This makes it challenging to standardize the padding logic. A more consistent approach should maybe be added in the future.
  • The center_crop functions in Transformers' image_transforms and in torchvision have different implementations: the cropping boundaries in Transformers are defined as top = (orig_height - crop_height) // 2, left = (orig_width - crop_width) // 2, whereas torchvision uses top = int(round((orig_height - crop_height) / 2.0)), left = int(round((orig_width - crop_width) / 2.0)). This can result in a one-pixel shift in the crop when the size before cropping is odd (see the small sketch below). I don't think this should be much of a problem for users, but when comparing outputs of slow and fast image processors in tests, it results in large differences.
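For concreteness, here is a tiny sketch of the two boundary computations on an odd leftover size (the numbers are arbitrary, chosen only to show the one-pixel shift):

orig_height, crop_height = 11, 4

# Boundary as computed in Transformers' image_transforms.center_crop
top_transformers = (orig_height - crop_height) // 2              # 7 // 2 == 3

# Boundary following the torchvision-style formula quoted above
top_torchvision = int(round((orig_height - crop_height) / 2.0))  # round(3.5) == 4

print(top_transformers, top_torchvision)  # 3 4 -> the crops are shifted by one pixel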

Who can review?

@qubvel @molbap @zucchini-nlp


@yonigozlan force-pushed the improve-fast-image-processor-base branch from 9e31e43 to c393647 on December 7, 2024 at 00:18
@yonigozlan (Member Author)

@ArthurZucker This PR is still in a rough state but pinging you for visibility as there are lots of refactoring choices that are up for discussion!

@qubvel (Member) left a comment

I made an initial quick pass. My main concern here is how common these methods are. I mean, for preprocess, does it scale to most models, or will we need to override it somewhere?

I think (but might be wrong) that the better design choice would be to follow standards in image processing, where we define each transformation as a separate class and then can pipe them. This approach is followed by torchvision, albumentations, kornia, and imgaug. So I suppose this is a pretty robust approach that should work well in our case too.

We can reuse the original torch transforms if they are compatible with torch compile; otherwise, we can rewrite them ourselves. The most common transforms will live in the base file, while custom ones will live in the model's image_processing_* file and can be moved to common for reuse.

Let me know what you think regarding this. Do you see any difficulties using this approach?

@yonigozlan (Member Author) commented Dec 11, 2024

> I think (but might be wrong) that the better design choice would be to follow standards in image processing, where we define each transformation as a separate class and then can pipe them. This approach is followed by torchvision, albumentations, kornia, and imgaug. So I suppose this is a pretty robust approach that should work well in our case too.
>
> We can reuse the original torch transforms if they are compatible with torch compile; otherwise, we can rewrite them ourselves. The most common transforms will live in the base file, while custom ones will live in the model's image_processing_* file and can be moved to common for reuse.

@qubvel The problem I see with this is that some image processors need additional logic in between transforms that depends on the result of the previous transforms (like the padding operation in DETR-like image processors), so unless I'm mistaken, I don't think piping class transforms would be possible here (see the short sketch below).
It feels to me that having some processors use piped class transforms and others functional transforms would be confusing and break consistency. I also think that using functional transforms is closer to what is done in slow processors, so it may be easier for contributors to adapt when they need to implement "exotic" fast image processing operations.

That said, I might be misunderstanding or missing something if this is a standard approach in other image preprocessing libraries, so happy to discuss this further.
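To illustrate the point, here is a hypothetical sketch (not code from the PR) of a DETR-style flow where the pad size can only be computed after resizing, so it cannot be expressed as a single fixed pipeline of class transforms:

import torch
import torch.nn.functional as F
from torchvision.transforms.v2 import functional as TF

def preprocess(images: list, shortest_edge: int = 800) -> torch.Tensor:
    # Step 1: resize each image (an ordinary transform).
    resized = [TF.resize(img, [shortest_edge]) for img in images]

    # Step 2: the pad size depends on the *result* of step 1, so it has to be
    # computed in between transforms rather than declared up front in a Compose.
    max_h = max(img.shape[-2] for img in resized)
    max_w = max(img.shape[-1] for img in resized)
    padded = [
        F.pad(img, (0, max_w - img.shape[-1], 0, max_h - img.shape[-2]))
        for img in resized
    ]
    return torch.stack(padded)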

@yonigozlan force-pushed the improve-fast-image-processor-base branch from 867b1f5 to 3f8674f on December 11, 2024 at 22:15
@zucchini-nlp (Member) left a comment

Very cool PR, happy to see fast image processors getting propagated!

I left a few questions here and there, mostly nits. Overall I like the "less code for users" idea and moving redundant parts to the base class. But it seems like we are hiding the core processing logic in the base class, so I agree with @qubvel that having a pipe of transforms looks nicer and easier to consume.

And for weird models, we probably don't need a separate patching mixin, as it doesn't scale well: not many of those patch methods will stick around for a long time, and new methods will continue to be invented. I am also wondering if all this repeated code can be squeezed out when we add modular to those models? For example, llava-next-style patching is used in three models, so we could define it once and then let modular copy the main code body.

@yonigozlan force-pushed the improve-fast-image-processor-base branch from f12c5d7 to da6de2e on December 16, 2024 at 19:35
@yonigozlan (Member Author)

Hello @qubvel @zucchini-nlp ! Thanks again for your reviews. I added examples of how I would write processors using class-based transforms, and a section in the PR with the advantages and drawbacks I see for functional vs class transforms. If you have some time I would be glad to have your opinions on this :)

@qubvel (Member) left a comment

Great work! Thanks for providing both options for review 👍 I agree, the functional approach looks more flexible right now.

I would differentiate image processors for the following cases:

  • Basic image transforms: operate only on raw images, and usually, in the original code, we might see the torchvision defined preprocessing pipeline.
  • Patching transforms: operate on patches but still work with images only. The preprocessing usually looks as follows: split into patches -> apply basic image transforms to patches (normalize, rescale, etc.).
  • Annotation-aware image transforms: operate on images and annotations (object detection, image segmentation). These are the most complicated, I suppose, because for each transform, we have to handle annotation transformation like bounding boxes/masks/keypoints.

It might be oversimplified; am I missing something?

I think we can have different styles of image processors for these cases. And the thing we have to take into consideration is that it's common to have transforms written in torchvision, so it would be nice to have a clear way to port the original transforms to be transformers-like.

@yonigozlan (Member Author)

> Great work! Thanks for providing both options for review 👍 I agree, the functional approach looks more flexible right now.

I believe so too. Using class transforms might look nicer in some cases, but they are limiting and add a layer of complexity for contributors that I don't think is necessary.
I can leave the two class-transform-based fast image processors (ViTImageProcessorFast and LlavaOnevisionImageProcessorFast) as they are for now, in case future reviewers have a stronger opinion on this, but I'll focus on improving the functional-based ones.

> I would differentiate image processors for the following cases:
>
> Basic image transforms: operate only on raw images, and usually, in the original code, we might see the torchvision defined preprocessing pipeline.
> Patching transforms: operate on patches but still work with images only. The preprocessing usually looks as follows: split into patches -> apply basic image transforms to patches (normalize, rescale, etc.).
> Annotation-aware image transforms: operate on images and annotations (object detection, image segmentation). These are the most complicated, I suppose, because for each transform, we have to handle annotation transformation like bounding boxes/masks/keypoints.
> It might be oversimplified; am I missing something?
>
> I think we can have different styles of image processors for these cases. And the thing we have to take into consideration is that it's common to have transforms written in torchvision, so it would be nice to have a clear way to port the original transforms to be transformers-like.

Agreed on the three different categories, but I'm not sure how to handle this differentiation in the code.
I think having BaseImageProcessorFast as the template for the first category (as it is currently) is a nice way to reduce diffs and enforce consistency. For the two others, as we discussed before, having a mixin for each doesn't seem to be the way to go: there are so many slight variations among the models in each category that it would be impossible to write a Mixin that generalizes to all of them and stands the test of time.

What I was thinking was to add some guidance in the fast image processor file generated by the transformers-cli, explaining which existing processor to use as a baseline depending on what sort of image processing the model needs.

@yonigozlan changed the title from [WIP] Refactoring of ImageProcessorFast to Refactoring of ImageProcessorFast on Jan 6, 2025
@yonigozlan (Member Author)

@ArthurZucker, @qubvel, @zucchini-nlp Here's a new iteration where I implemented @ArthurZucker's suggestions.
Now the only function fast image processors need to override in most cases is _preprocess. The only exceptions (for now) are the DETR processors, as they work a little differently for backward compatibility, and Qwen2VL, as it takes both images and videos as inputs.
I'm also waiting on PR #35122 to uniformize the Pixtral image processor.

The "pre"-preprocessing functions can now handle arbitrary kwargs, as long as they are specified in the valid_extra_kwargs attribute of the child class.
The remaining issue I see is that the class docstrings are less informative now, as the extra kwargs can't appear in the docstring anymore. To mitigate this, I wrote detailed docstrings for image processors that override _preprocess, which include descriptions of the extra kwargs.
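As a rough sketch of the pattern described above (the class name, kwarg, and attribute usage here are illustrative assumptions, not the exact API introduced in this PR):

from typing import Optional

from transformers import BatchFeature
from transformers.image_processing_utils_fast import BaseImageProcessorFast


class MyModelImageProcessorFast(BaseImageProcessorFast):
    # Hypothetical extra kwarg accepted on top of the defaults handled by the base class.
    valid_extra_kwargs = ["crop_pct"]

    def _preprocess(self, images, crop_pct: Optional[float] = None, **kwargs) -> BatchFeature:
        # Model-specific logic goes here (e.g. resizing based on crop_pct before center cropping);
        # everything else (validation, batching, tensor conversion) stays in the base class.
        ...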

@yonigozlan yonigozlan removed the request for review from stevhliu January 22, 2025 18:53
@ArthurZucker (Collaborator) left a comment

Much better, thanks!
Would be nice to use Unpack[DefaultFastImageProcessorKwargs] rather than having to move around the 20+ arguments for both init and forward.

@yonigozlan (Member Author)

> Would be nice to use Unpack[DefaultFastImageProcessorKwargs] rather than having to move around the 20+ arguments for both init and forward.

Agreed, but if I do that the CI won't let me use a start docstring describing all the parameters, and the type hints when instantiating an image processor will be much less useful. What do you think we should prioritize, @ArthurZucker? Or do you see a way to use both Unpack[DefaultFastImageProcessorKwargs] and keep a useful docstring?
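For readers unfamiliar with the pattern under discussion, here is a minimal generic sketch of typed kwargs via Unpack (the TypedDict below is an assumption for illustration, not the actual DefaultFastImageProcessorKwargs definition):

from typing import List, Optional, TypedDict

from typing_extensions import Unpack  # typing.Unpack exists natively from Python 3.12


class FastImageProcessorKwargs(TypedDict, total=False):
    do_resize: Optional[bool]
    size: Optional[dict]
    do_normalize: Optional[bool]
    image_mean: Optional[List[float]]
    image_std: Optional[List[float]]


def preprocess(images, **kwargs: Unpack[FastImageProcessorKwargs]):
    # Static type checkers and IDEs see the allowed kwargs through the TypedDict,
    # but a generated docstring no longer lists each parameter explicitly.
    do_resize = kwargs.get("do_resize", True)
    ...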

@ArthurZucker (Collaborator) left a comment

Super cool! Last nits IMO and we should be alright

}
data_format = kwargs.pop("data_format", None)
data_format = data_format if data_format is not None else ChannelDimension.FIRST

images = self._prepare_input_images(
@ArthurZucker (Collaborator) commented:

Here you will have an issue: you are popping from the kwargs too many times, so you pop and then you use self, and the values will be different.
This should be simplified. Just update self with kwargs, then do the rest always with self, something like that.

crop_pct=crop_pct,
**kwargs,
)
def __init__(self, **kwargs: Unpack[valid_init_kwargs]):
@ArthurZucker (Collaborator) commented:
Suggested change
def __init__(self, **kwargs: Unpack[valid_init_kwargs]):
def __init__(self, **kwargs: Unpack[ConvNextFastImageProcessorInitKwargs]):

This is better and explicit.

@@ -114,8 +95,8 @@ def __init__(
overridden by `crop_pct` in the`preprocess` method.
""",
)
def preprocess(self, *args, **kwargs) -> BatchFeature:
return super().preprocess(*args, **kwargs)
def preprocess(self, images: ImageInput, **kwargs: Unpack[valid_preprocess_kwargs]) -> BatchFeature:
@ArthurZucker (Collaborator) commented:

Same comment: let's be explicit if possible.

Comment on lines 311 to 314
valid_init_kwargs = DeformableDetrFastImageProcessorInitKwargs
valid_preprocess_kwargs = DeformableDetrFastImageProcessorPreprocessKwargs

def __init__(self, **kwargs: Unpack[valid_init_kwargs]) -> None:
@ArthurZucker (Collaborator) commented:

same comment

@yonigozlan merged commit fa56dcc into huggingface:main on Feb 4, 2025 (23 of 25 checks passed)
Comment on lines +99 to +100
valid_init_kwargs = LlavaNextFastImageProcessorInitKwargs
valid_preprocess_kwargs = LlavaNextFastImageProcessorPreprocessKwargs
@ArthurZucker (Collaborator) commented:

this is probably a ditto

Comment on lines +258 to +260
valid_init_kwargs = DefaultFastImageProcessorInitKwargs
valid_preprocess_kwargs = DefaultFastImageProcessorPreprocessKwargs

@ArthurZucker (Collaborator) commented:

should be removed!

elvircrn pushed a commit to elvircrn/transformers that referenced this pull request Feb 13, 2025
* add init and base image processing functions

* add add_fast_image_processor to transformers-cli

* add working fast image processor clip

* add fast image processor to doc, working tests

* remove "to be implemented" SigLip

* fix unprotected import

* fix unprotected vision import

* update ViTImageProcessorFast

* increase threshold slow fast ewuivalence

* add fast img blip

* add fast class in tests with cli

* improve cli

* add fast image processor convnext

* add LlavaPatchingMixin and fast image processor for llava_next and llava_onevision

* add device kwarg to ImagesKwargs for fast processing on cuda

* cleanup

* fix unprotected import

* group images by sizes and add batch processing

* Add batch equivalence tests, skip when center_crop is used

* cleanup

* update init and cli

* fix-copies

* refactor convnext, cleanup base

* fix

* remove patching mixins, add piped torchvision transforms for ViT

* fix unbatched processing

* fix f strings

* protect imports

* change llava onevision to class transforms (test)

* fix convnext

* improve formatting (following Pavel review)

* fix handling device arg

* improve cli

* fix

* fix inits

* Add distinction between preprocess and _preprocess, and support for arbitrary kwargs through valid_extra_kwargs

* uniformize qwen2_vl fast

* fix docstrings

* add add fast image processor llava

* remove min_pixels max_pixels from accepted size

* nit

* nit

* refactor fast image processors docstrings

* cleanup and remove fast class transforms

* update add fast image processor transformers cli

* cleanup docstring

* uniformize pixtral fast and  make _process_image explicit

* fix prepare image structure llava next/onevision

* Use typed kwargs instead of explicit args

* nit fix import Unpack

* clearly separate pops and gets in base preprocess. Use explicit typed kwargs

* make qwen2_vl preprocess arguments hashable